Firefox opens a blank page when using the Selenium WebDriver (geckodriver) for this handy tool here.
Does anyone have an idea how to fix this?
Related
How do I launch a browser with all the user data (history, cookies, etc.) in the Python Selenium WebDriver?
You can load an existing browser profile when opening a Selenium WebDriver. For Chrome see here; for Firefox see similar solutions.
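A minimal sketch of the Firefox case in Python (the profile path is just an example; point it at the existing profile directory you want to load):

from selenium import webdriver

# Example path only -- use the directory of your own existing Firefox profile
profile = webdriver.FirefoxProfile("/path/to/existing/profile")
driver = webdriver.Firefox(firefox_profile=profile)
driver.get("http://www.python.org")

For Chrome, the usual approach is to pass a user-data-dir argument through ChromeOptions instead.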
I am using Firefox 55.0.2 (64-bit) with Selenium. The code works fine; however, I see an icon in Firefox saying "Browser is under remote control" and the URL bar is highlighted in orange, as shown in the screenshot below.
Is there a way to disable this message in Firefox?
You wouldn't get that in previous versions of Firefox (version 45 or below). It's just an update and cannot be disabled. Even Google Chrome shows a similar message (that Chrome is being controlled by automated software). Alternatively, you could just use a previous version of the browser.
Maybe not related directly to Selenium use, but in case you get stuck in "under remote control" after having launched Firefox with something like:
firefox -marionette -foreground -no-remote -profile /path/to/existing/profile
it's because launching it this way toggles marionette:true in about:config.
Thanks to Firefox Stuck in Remote Control Mode for pointing to this marionette config entry.
I'm trying to make a web crawler that clicks on ads (yes, I know). It's very sophisticated, but I realized that Google Ads aren't shown when JavaScript is disabled. Today I use Mechanize, and it doesn't "accept" JavaScript.
I heard Selenium uses another system to crawl the net.
The only thing I want to do is access my page and click on the ad (generated by JavaScript).
Can Selenium do it?
Selenium is a browser automation tool. You can basically automate everything you can do in your browser. Start with going through the Getting Started section of the documentation.
Example:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.python.org")
print(driver.title)
driver.close()
Besides automating common browsers like Chrome, Firefox, Safari, or Internet Explorer, you can also use the PhantomJS headless browser.
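A minimal sketch with PhantomJS (assuming the phantomjs executable is installed and on your PATH):

from selenium import webdriver

# Assumes the phantomjs binary is on PATH; otherwise pass executable_path="..."
driver = webdriver.PhantomJS()
driver.get("http://www.python.org")
print(driver.title)
driver.quit()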
I installed the Selenium plugin in Firefox to automate a monitoring process. When I record the test case and rerun it, it fails on the popup validation at the end. I confirm that the monitoring was successful when I get a popup at the end saying "successful". So I record the test case using Selenium, and when I rerun it, at the end, while it is waiting for the popup to appear, two things happen:
the popup from the recorded playback does not come up.
the test case fails on the popup.
Please also suggest whether I can use IE with Selenium.
Here you can find a good and easy example of how to handle a JavaScript popup using Selenium WebDriver: Selenium Webdriver for Javascript popup
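If the popup is a native JavaScript alert() dialog, a minimal sketch in Python looks like this (the URL is just a placeholder; the Java bindings have an equivalent switchTo().alert()):

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://example.com")  # placeholder URL

# Wait up to 10 seconds for the alert, read its text, then accept it
WebDriverWait(driver, 10).until(EC.alert_is_present())
alert = driver.switch_to.alert
print(alert.text)   # e.g. "successful"
alert.accept()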
To use IE with Selenium WebDriver, do the following:
1 - Download the IE driver for Selenium WebDriver from HERE
2 - You can call the IE driver from Selenium WebDriver as below:
// Point Selenium at the IEDriverServer executable, then start Internet Explorer
System.setProperty("webdriver.ie.driver", "D:/IEDriver.exe");
InternetExplorerDriver ieDriver = new InternetExplorerDriver();
Note: In the above code, "D:/IEDriver.exe" is an example path; please set the actual path to where you have put your IE driver.
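If you are using the Python bindings instead, a roughly equivalent sketch (same example path as above) is:

from selenium import webdriver

# "D:/IEDriver.exe" is an example path -- point it at your own IEDriverServer executable
driver = webdriver.Ie(executable_path="D:/IEDriver.exe")
driver.get("http://www.python.org")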
I am trying to scrape a website that contains images using headless Selenium.
Initially, the website populates 50 images. If you scroll down, more and more images are loaded.
Windows 7 x64
python 2.7
recent install of selenium
[1] Non-Headless
Navigating to the website with Selenium as follows:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get(url)
browser.execute_script('window.scrollBy(0, 10000)')
browser.page_source
This works (if anyone has a better suggestion, please let me know).
I can continue to scrollBy() until I reach the end and then pull the page source.
[2] Headless with HTMLUNIT
from selenium import webdriver
driver = webdriver.Remote(desired_capabilities=webdriver.DesiredCapabilities.HTMLUNIT)
driver.get(url)
I cannot use scrollBy() in this headless environment.
Any suggestions on how to scrape this kind of page?
Thanks
One option is to study the JavaScript to see how it calculates what to load next. Then implement that logic in your scraping client instead. Once you have done that, you can use faster scraping tools like Perl's WWW::Mechanize.
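For example, if the browser's network tab shows that scrolling triggers an XHR for the next batch of images, you can often call that request directly (the same idea as the WWW::Mechanize suggestion, shown here in Python for consistency with the rest of the thread). The endpoint URL and the offset/limit parameters below are purely hypothetical placeholders; substitute whatever request your site actually makes:

import requests

# Hypothetical endpoint and parameters -- find the real ones in the browser devtools network tab
API_URL = "https://example.com/api/images"

images = []
offset = 0
while True:
    resp = requests.get(API_URL, params={"offset": offset, "limit": 50})
    batch = resp.json()
    if not batch:
        break
    images.extend(batch)
    offset += len(batch)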
You need to enable JavaScript explicitly when using the HtmlUnit Driver:
driver.setJavascriptEnabled(true);
According to [the docs](http://code.google.com/p/selenium/wiki/HtmlUnitDriver), it should emulate IE's JavaScript handling by default.
When I tried the same method, I got error messages saying Selenium crashed while connecting to Java to simulate JavaScript.
I passed the script to the execute_script method instead, and then the code worked well.
I guess the communication between Selenium and the Java server side was not configured properly.
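For reference, a minimal sketch of the kind of scrolling loop you can run through execute_script (a generic pattern, not necessarily the exact script used here):

import time
from selenium import webdriver

driver = webdriver.Firefox()  # or the Remote driver from the question
driver.get(url)               # url: the page with the lazy-loaded images

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    # Scroll to the bottom and give the lazy loader time to fetch more images
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

html = driver.page_source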
Enabling JavaScript with HTMLUNITWITHJS is possible and quick ;)
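In the Python bindings that is roughly (assuming a Selenium server is running, as the Remote driver in the question requires):

from selenium import webdriver

# HTMLUNITWITHJS is HtmlUnit with JavaScript enabled; it needs a running Selenium server
driver = webdriver.Remote(
    desired_capabilities=webdriver.DesiredCapabilities.HTMLUNITWITHJS
)
driver.get(url)  # url: the page with the lazy-loaded images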