Selenium WebDriver doesn't get new page after redirect - selenium

I have a login page; after sending the keys and clicking the login button, the webpage should redirect me to a new page with my session. Sometimes it does, but in most cases driver.title, for example, still holds the title of the authentication page, and obviously this prevents me from finding the elements on the page I'm looking for.
I already tried driver.get(correct url), but it didn't work.
Here are my WebDriver options:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.page_load_strategy = 'eager'
options.binary_location = '/opt/headless-chromium'
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--single-process')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('/opt/chromedriver', options=options)
driver.implicitly_wait(60)

options.page_load_strategy = 'eager'
Remove this. With the 'eager' page load strategy the driver does not wait for the page load to finish, so you may be reading the title before the new page has loaded.
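If the goal is just to be sure the redirect has finished before reading driver.title, an explicit wait is often more reliable than tuning the load strategy; here is a minimal sketch against the driver from the question, where the login-button locator and the "dashboard" URL fragment are placeholders, not values from the real page:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# ... driver configured as above, credentials already typed in ...
driver.find_element(By.ID, "login-button").click()  # placeholder locator
# Wait up to 60 seconds for the post-login URL before touching driver.title.
WebDriverWait(driver, 60).until(EC.url_contains("dashboard"))  # placeholder fragment
print(driver.title)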

Related

How to trace network requests when a click opens a new tab with selenium/webdriver

I am using selenium/webdriver to test a web page in Chrome.
I want to trace the network activity that happens after I click each button; every click opens a new tab (I cannot change anything about the buttons, since they are controlled by minified JavaScript).
I tried Chrome Dev Tools: How to trace network for a link that opens a new tab? but it did not match my expectation.
Below is a mock example; in it I want to capture the new tab's request to "https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js", but it fails.
(My actual scenario is that all pages are opened inside an Android app, and every click creates a new tab/window.)
import json
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

caps = {
    "browserName": "chrome",
    'goog:loggingPrefs': {'performance': 'ALL'}
}
options = Options()
options.add_experimental_option("w3c", False)
driver = webdriver.Chrome(desired_capabilities=caps, options=options)
# access a link
driver.get("https://www.google.com/")
# add a link for open new tab(just mock)
driver.execute_script('a = window.document.createElement("a");a.id="newtab";a.href="https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js";a.target="_blank";a.text="33333";window.document.body.appendChild(a);')
time.sleep(5)
# click a button/link which open a new tab
element = driver.find_element_by_id('newtab')
driver.execute_script("arguments[0].click();", element)
time.sleep(3)
wins = driver.window_handles
driver.switch_to.window(wins[-1])
performance_log = driver.get_log('performance')
for packet in performance_log:
    message = json.loads(packet.get('message')).get('message')
    if message.get('method') != 'Network.responseReceived':
        continue
    requestId = message.get('params').get('requestId')
    url = message.get('params').get('response').get('url')
    try:
        resp = driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': requestId})
    except BaseException as e:
        resp = "error"
    print("\n===============")
    print(url)
    # print(resp)
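Note that on recent Selenium 4 releases the desired_capabilities argument has been removed from webdriver.Chrome, so if the snippet above fails with a TypeError, the logging preference can be set on the options object instead; a minimal sketch, assuming Selenium 4:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
# Route DevTools performance events into the 'performance' log.
options.set_capability('goog:loggingPrefs', {'performance': 'ALL'})
driver = webdriver.Chrome(options=options)
driver.get("https://www.google.com/")
for entry in driver.get_log('performance'):
    print(entry['message'][:120])  # each entry wraps a JSON-encoded DevTools event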

very simple selenium auto-click for webpage does not work

I am very new to this.
I was trying to make an auto mail sender for practice.
It opens the website, but it never clicks the login button.
Nothing happens after the page opens.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(
    executable_path='/Users/aidencho/practice/chromedriver', options=options)
# open a website
url = 'https://naver.com'
driver.get(url)
# driver.maximize_window()
action = ActionChains(driver)
driver.find_element_by_css_selector('.link_login').click()
# driver.find_element_by_css_selector("#account > a").click()
# driver.find_element_by_class_name('.account > a').click()
One more thing.
I saw someone else do this, and their editor auto-completed the whole driver.find_element_by_css_selector call as soon as they typed driver.find.
Why doesn't mine do that?
Could it be a settings problem?
find_element_by_css_selector is deprecated. Please use find_element(by=By.CSS_SELECTOR, value=css_selector) instead.
driver.find_element(by="css selector", value='.link_login')
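With the By helper imported, the same call in Selenium 4 style would look roughly like this (same .link_login selector as above):
from selenium.webdriver.common.by import By
driver.find_element(By.CSS_SELECTOR, '.link_login').click()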

Selenium Chromedriver opening a blank page

When I open this URL with the Selenium webdriver, I get a blank page with a 429 response. I haven't sent too many requests, as I only make one, and it doesn't work. I've tried multiple solutions but can't manage to fix it. Here is my code:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
# to suppress the error messages/logs
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_argument("disable-blink-features=AutomationControlled")
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r"C:\\Users\\pople\\OneDrive\\Desktop\\chromedriver.exe")
driver.get('https://www.bluenile.com/diamond-search')
You can access that page with undetected_chromedriver
import undetected_chromedriver as uc
browser = uc.Chrome()
url = 'https://www.bluenile.com/diamond-search'
browser.get(url)
You can install the package with pip install undetected-chromedriver. Make sure your chrome browser is up to date, and your chromedriver is compatible with it.
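A slightly fuller sketch of the same idea, assuming you want a quick sanity check that something actually rendered before you start scraping (the wait on the body tag is only a generic placeholder check):
import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = uc.Chrome()
browser.get('https://www.bluenile.com/diamond-search')
# Generic check that the document rendered; replace with a selector from the page itself.
WebDriverWait(browser, 20).until(EC.presence_of_element_located((By.TAG_NAME, 'body')))
print(browser.title)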

Unable to log in using selenium

I am trying to scrape this website using Python's BeautifulSoup package, and for automating the user flow I am using Selenium. As this website requires authentication to access the page, I am trying to log in first using the Selenium webdriver. Here is my code:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException

def configure_driver():
    # Add additional Options to the webdriver
    chrome_options = Options()
    # add the argument and make the browser Headless.
    chrome_options.add_argument("--headless")
    # Instantiate the Webdriver: Mention the executable path of the webdriver you have downloaded
    # For linux/Mac
    # driver = webdriver.Chrome(options = chrome_options)
    # For windows
    driver = webdriver.Chrome(executable_path="/home/<user_name>/Downloads/chromedriver_linux64/chromedriver",
                              options=chrome_options)
    return driver

def getLinks(driver):
    # Step 1: Go to the Coursera page that requires authentication
    driver.get(f"https://www.coursera.org/learn/competitive-data-science/supplement/RrrDR/additional-material-and-links")
    # wait for the element to load
    try:
        WebDriverWait(driver, 5).until(lambda s: s.find_element_by_class_name("_ojjigd").is_displayed())
    except TimeoutException:
        print("TimeoutException: Element not found")
        return None
    email = driver.find_element_by_name('email')
    print(str(email))
    password = driver.find_element_by_name('password')
    email.send_keys("username")  # provide some actual username
    password.send_keys("password")  # provide some actual password
    form = driver.find_element_by_name('login')
    print(form.submit())
    WebDriverWait(driver, 10)
    print(driver.title)
    soup = BeautifulSoup(driver.page_source)
    # Step 3: Iterate over the search result and fetch the course
    divs = soup.findAll('div', attrs={'class': 'item-box-content'})
    print(len(divs))

# create the driver object.
driver = configure_driver()
getLinks(driver)
# close the driver.
driver.close()
Now, after doing form.submit(), it is expected to log in and change the page, right? But it simply stays on the same page, so I cannot access the contents of the authenticated page. Someone please help.
That is because the login button has no name attribute.
Instead of this:
form = driver.find_element_by_name('login')
use this:
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Login']"))).click()
I tried this code on my local machine; it seems to work fine:
driver.maximize_window()
wait = WebDriverWait(driver, 30)
driver.get("https://www.coursera.org/learn/competitive-data-science/supplement/RrrDR/additional-material-and-links")
wait.until(EC.element_to_be_clickable((By.ID, "email"))).send_keys("some user name")
wait.until(EC.element_to_be_clickable((By.ID, "password"))).send_keys("some user name")
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Login']"))).click()
Since the login button is inside a form, .submit() should work too.
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Login']"))).submit()
This works too.
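For completeness, the answer's snippets assume the usual wait imports; here is a minimal self-contained sketch of the same flow, with the locators taken from the answer and placeholder credentials:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.maximize_window()
wait = WebDriverWait(driver, 30)
driver.get("https://www.coursera.org/learn/competitive-data-science/supplement/RrrDR/additional-material-and-links")
wait.until(EC.element_to_be_clickable((By.ID, "email"))).send_keys("some user name")
wait.until(EC.element_to_be_clickable((By.ID, "password"))).send_keys("some password")
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Login']"))).click()
print(driver.title)  # should show the authenticated page once it has loaded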

AttributeError: module 'selenium.webdriver' has no attribute 'switch_to_alert'

I am making a simple crawler that opens a site and, when a pop-up appears, closes it, but the following command isn't working.
from selenium import webdriver
browser = webdriver.Chrome(executable_path=r"C:\Program Files\chromedriver.exe")
url = "https://www.bnbaccessories.com/"
browser.get(url)
alert = webdriver.switch_to_alert().dismiss()
innerHTML = browser.execute_script("return document.body.innerHTML")
browser.implicitly_wait(50)
browser.close()
Use this:
alert = browser.switch_to.alert.dismiss()
instead of:
webdriver.switch_to_alert().dismiss()
The driver instance is named browser, not webdriver.
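A minimal sketch of the corrected flow, assuming the pop-up really is a JavaScript alert rather than an HTML modal (the URL and chromedriver path are taken from the question):
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Chrome(executable_path=r"C:\Program Files\chromedriver.exe")
browser.get("https://www.bnbaccessories.com/")
# Wait up to 10 seconds for an alert to appear, then dismiss it.
try:
    WebDriverWait(browser, 10).until(EC.alert_is_present())
    browser.switch_to.alert.dismiss()
except Exception:
    pass  # no alert appeared within the timeout
innerHTML = browser.execute_script("return document.body.innerHTML")
browser.close()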