Selenium: Page JS not running, stuck in load

I'm running a scraper on a site and the JavaScript doesn't seem to be executing. The page is stuck on "loading", but the rest of the HTML loads fine.
Screenshot of the stuck load:
https://i.stack.imgur.com/0WofB.png
from selenium import webdriver

bin_path = ''  # path to the local chromedriver binary
driver = webdriver.Chrome(executable_path=bin_path,
                          desired_capabilities={'javascriptEnabled': True})
driver.get(URL)
I'm not sure if I need to pull out each script on the page and execute them one at a time?
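For what it's worth, Chrome driven by Selenium executes JavaScript by default, so the javascriptEnabled capability shouldn't be needed; pages that hang on a loading indicator usually just need an explicit wait for the dynamically rendered content. A minimal sketch of such a wait (the URL and the #loading selector are placeholders, not taken from the question):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # JavaScript is enabled by default
driver.get("https://example.com")  # placeholder URL

# wait up to 30 s for the loading indicator (placeholder selector) to disappear
WebDriverWait(driver, 30).until(
    EC.invisibility_of_element_located((By.CSS_SELECTOR, "#loading"))
)
print(driver.page_source[:500])
driver.quit()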

Related

Can I install chromedriver on PythonAnywhere and not use it headless?

I am trying to run this code, which works on my local machine, on PythonAnywhere, and I want to understand if that is even possible:
from selenium import webdriver
from bs4 import BeautifulSoup
import time

# Initialize webdriver
driver = webdriver.Chrome(executable_path="/Users/matteo/Downloads/chromedriver")

# Navigate to website
driver.get("https://apnews.com/article/prince-harry-book-meghan-royals-4141be64bcd1521d1d5cf0f9b65e20b5")
time.sleep(5)

# Parse page source
soup = BeautifulSoup(driver.page_source, "html.parser")

# Find desired elements using Beautiful Soup
elements = soup.find_all("p")

# Print element text
for element in elements:
    print(element.text)

# Close webdriver
driver.quit()
Do I need to have Chrome installed to make that work, or is Chromium enough? When I run that code on my local machine a Chrome window opens up. How does that work on PythonAnywhere? Would it crash?
I am wondering if the code I am using only works on a machine with a GUI and Chrome installed, or if it can work on PythonAnywhere too.
The short answer is no. ChromeDriver requires Chrome to be installed. You can run your tests headless to save time, but Chrome still has to be installed.
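As an illustration, a minimal sketch of running Chrome headless (so no GUI is needed, though Chrome/Chromium and a matching chromedriver still have to be installed); the binary paths below are placeholders, not from the answer:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")    # run without opening a browser window
options.add_argument("--no-sandbox")  # often needed in restricted environments
options.add_argument("--disable-gpu")

# placeholder paths: point these at your local Chromium/Chrome and chromedriver
options.binary_location = "/usr/bin/chromium"
driver = webdriver.Chrome(executable_path="/usr/local/bin/chromedriver", options=options)

driver.get("https://example.com")
print(driver.title)
driver.quit()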

Get updated HTML tags in ChromeDriver using Selenium after a click in C#

I am working on Selenium automation testing. After a click on the <ViewStatement> button, the webpage gets updated with a PDF link and some other tags, but the update is not reflected in the ChromeDriver driver object.
I already tried PageSource and loading the same URL into another ChromeDriver instance (driver1), but it is still not working.
Link: https://mfs.kfintech.com/
Code:
driver.FindElement(By.XPath("//input[@value='View Statement']")).Click();
Before this click the page is completely loaded into the driver. After the click the page does not reload, but some tags are inserted into the same page (the URL is the same before and after the click), and I want the whole page as it is after the click.
ViewStatement does not exist on the linked page; it only appears after login.

Python Selenium - open more Chrome apps

I'm using Selenium to automate certain stuff in Chrome and I know how to open multiple tabs, but is it possible to open Chrome itself multiple times?
Right now, when I want to open a new Chrome app the old one closes. I want it to stay open.
Every time you want to open a new Chrome browser you have to create a new instance of the webdriver:
from time import sleep
from selenium import webdriver

first_driver = webdriver.Chrome(executable_path="/path/to/chromedriver")
first_driver.get("https://google.com")

second_driver = webdriver.Chrome(executable_path="/path/to/chromedriver")
second_driver.get("https://ifconfig.me")
sleep(5)

# using a for loop, keeping a reference to each browser so none is lost
drivers = []
for _ in range(2):  # how many browsers you want to open
    driver = webdriver.Chrome(executable_path="/path/to/chromedriver")
    driver.get("https://google.com")
    drivers.append(driver)
sleep(5)

Is there a way to add -Dchrome.switches chrome properties to serenity.properties or serenity.conf files?

I tried to add the switches below to the serenity.conf file so that the Chrome browser always loads with these options, but then the browser fails to load. When I pass the same options via the command line, like "gradle test -Dchrome.switches="--no-sandbox,--ignore-certificate-errors,--homepage=about:blank,--no-first-run"", the browser starts successfully.
"-Dchrome.switches="--no-sandbox,--ignore-certificate-errors,--homepage=about:blank,--no-first-run"
Is there a way to always open the Chrome browser with these options without having to pass them via the command line, or to ship the chromedriver as part of the framework?
serenity.conf
#
# WebDriver configuration
#
webdriver {
    driver = chrome
    autodownload = true
}
#headless.mode = true
serenity.test.root = java
#
# Chrome options can be defined using the chrome.switches property
#
chrome.switches = """--start-maximized;--test-type;--no-sandbox;--ignore-certificate-errors;
--disable-popup-blocking;--disable-default-apps;--disable-extensions-file-access-check;
--disable-web-security;--incognito;--disable-infobars,--disable-gpu,--homepage=about:blank,--no-first-run"""
Thanks!
Try
chrome {
    switches = "--start-maximized;--enable-automation;--no-sandbox;--disable-popup-blocking;--disable-default-apps;--disable-infobars;--disable-gpu;--disable-extensions;"
    preferences {
        download: "{prompt_for_download: false,directory_upgrade: true,default_directory:'${user.dir}/downloaded-files'}"
    }
}
Thank you!
I have switched to
@Managed
WebDriver driver;
https://serenity-bdd.github.io/theserenitybook/latest/web-testing-in-serenity.html#_a_simple_selenium_web_test
Below is the information on managed drivers in Serenity:
Serenity reduces the amount of code you need to write and maintain when you write web tests. For example, it takes care of creating WebDriver instances, and of opening and closing the browser for you.

How to use selenium in pandas to read a webpage?

I want to collect information from a webpage using chromedriver. How do I install it and use it?
You have to install Selenium first if you don't have it already. Then, to use Selenium:
from selenium.webdriver import Chrome

url = "URL of the webpage you want to read"

# setting up the driver
driver_path = "path of the chromedriver.exe file saved on your PC"
driver = Chrome(executable_path=driver_path)
driver.get(url)

# using a CSS selector
y = driver.find_element_by_css_selector('css selector of the data you want to read from the webpage').text
print(y)
You don't install chromedriver; you download the executable and use the path to it in webdriver.Chrome(). The ChromeDriver getting-started page has a comprehensive guide:
from selenium import webdriver
import time

driver = webdriver.Chrome('/path/to/chromedriver')  # the path where you saved the executable
driver.get('http://www.google.com/')
time.sleep(5)  # Let the user actually see something!
search_box = driver.find_element_by_name('q')
search_box.send_keys('ChromeDriver')
search_box.submit()
time.sleep(5)  # Let the user actually see something!
driver.quit()
Note: download the executable that matches your version of Chrome!
(You can check your version under Help > About Google Chrome.)
As mentioned by @Patha_Mondal, you need to download the driver and select the elements you want to read. However, since your original question asks "How to use selenium in pandas to read a webpage?", I would instead suggest using Scrapy along with Selenium to create a ".csv" file from the webpage data.
Read the ".csv" data into pandas using pandas.read_csv().
The data from the webpage might not be clean or properly formatted. Using Scrapy to create a dataset out of it makes it easier to read into pandas. Avoid using pandas directly in the same script as Selenium and Scrapy.
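As a rough sketch of that workflow (shown in one listing for brevity, even though the idea is to keep the scraping step and the pandas step separate; the URL, file name, and tag selector are placeholders, and plain Selenium plus the csv module stands in for a full Scrapy spider):
import csv
import pandas as pd
from selenium import webdriver

# step 1: scrape paragraphs into a CSV file (placeholder URL)
driver = webdriver.Chrome('/path/to/chromedriver')
driver.get('https://example.com')
rows = [[p.text] for p in driver.find_elements_by_tag_name('p')]
driver.quit()

with open('scraped.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['paragraph'])
    writer.writerows(rows)

# step 2: read the CSV into pandas
df = pd.read_csv('scraped.csv')
print(df.head())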
Hope it Helped.