I am trying to log in to GitHub using selenium-webdriver.
Whenever I run my Python script, GitHub asks me for a verification code to verify my device, which leads to an unsuccessful login, because it is not possible for me to check my mail and enter the verification code every time. Is there something wrong with my code?
Here is my code:
from selenium import webdriver
from importlib import reload
import time
import sys
reload(sys)
# Chrome is used
driver = webdriver.Chrome()
# base url
driver.get("http://github.com/login")
username = driver.find_element_by_id("login_field")
password = driver.find_element_by_id("password")
# password and username need to go into these values
username.send_keys("username")
time.sleep(1)
password.send_keys("password")
time.sleep(1)
login_form = driver.find_element_by_xpath("//input[@value='Sign in']")
time.sleep(1)
login_form.click()
time.sleep(1)
# These are some of the most popular users on github
prepend = ["jashkenas", "ruanyf", "substack", "kennethreitz", "jlord", "daimajia", "mdo", "schacon", "mattt",
"sindresorhus", "defunkt", "douglascrockford", "mbostock", "jeresig",
"mojombo", "addyosmani", "paulirish", "vczh", "romannurik", "tenderlove", "chriscoyier", "johnpapa",
"josevalim",
"charliesome", "CoderMJLee", "ry", "antirez", "muan", "isaacs", "angusshire",
"hadley", "hakimel", "yyx990803", "fat", "fabpot", "ibireme", "tekkub",
"BYVoid", "laruence", "onevcat", "tpope", "mrdoob", "LeaVerou", "chrisbanes", "wycats", "lifesinger",
"cloudwu", "mitsuhiko", "michaelliao", "ryanb", "clowwindy", "JacksonTian", "yinwang0", "Trinea",
"pjhyett", "dhh", "gaearon"]
for user in prepend:
    for t in range(1, 100):
        string = "https://github.com/{}/followers?page={}".format(user, t)
        driver.get(string)
        time.sleep(1)
        # make sure to pick the correct directory to save the files to
        # follow_button = driver.find_elements_by_xpath("//button[@type='submit']")
        follow_button = driver.find_elements_by_xpath("//button[@aria-label='Follow this person']")
        # Once the page is loaded, this clicks all the follow buttons
        try:
            for i in follow_button:
                i.submit()
        except:
            pass
        time.sleep(1)
driver.close()
I have an API which prints a success or error message to the console. I am new to Python and am trying to read the response. Google throws up so many examples using subprocess, but I don't want to run or call any command or subprocess. I just want to read the output after the API call below.
This is the response in the console on success:
17:50:52 | Logged in!!
This is the GitHub link for the SDK and documentation:
https://github.com/5paisa/py5paisa
This is the code:
from py5paisa import FivePaisaClient
email = "myemailid#gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)
client.login()
In general it is bad practice to get a value from STDOUT. There are ways, but it's pretty tricky (STDOUT is not made for that). And the problem doesn't come from you but from the API, which is badly designed: it should return a value, e.g. True or False (at least), to tell you whether you logged in, and it doesn't.
So, according to their documentation, it is not possible to know directly whether you're logged in, but you may be able to tell by checking the client_code attribute of the client object.
If client.client_code equals something then you should be logged in, and if it equals something else then not. You can compare its value after a successful login and after a failed one (wrong credentials, for instance). Then you can add a condition: if it is None or False or 0 (you will have to check this yourself), the login failed.
Can you try doing the following with a successful and failed login:
client.login()
print(client.client_code)
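Once you know which value marks a failure, a minimal sketch of the check could look like this (assuming a failed login leaves client_code empty or None; verify the actual failure value yourself):
client.login()
if client.client_code:  # hypothetical condition; adjust to the failure value you observe
    print("Logged in, client code:", client.client_code)
else:
    print("Login failed")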
Source of the API:
# Login function :
# (...)
message = res["body"]["Message"]
if message == "":
    log_response("Logged in!!")
else:
    log_response(message)
self._set_client_code(res["body"]["ClientCode"])
# (...)
# _set_client_code function :
def _set_client_code(self, client_code):
    try:
        self.client_code = client_code  # <<<< That's what we want
    except Exception as e:
        log_response(e)
Since this question asks how to capture "stdout", one way to accomplish this is to intercept the log message before it hits stdout.
The minimum code to capture a log message within a Python script looks like this:
#!/usr/bin/env python3
import logging

logger = logging.getLogger(__name__)

class RequestHandler(logging.Handler):
    def emit(self, record):
        if record.getMessage().startswith("Hello"):
            print("hello detected")

handler = RequestHandler()
logger.addHandler(handler)
logger.warning("Hello world")
Putting it all together you may be able to do something like this:
import logging
from py5paisa import FivePaisaClient

email = "myemailid@gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)

class PaisaClient(logging.Handler):
    def __init__(self):
        super().__init__()
        self.loggedin = False  # this is the variable we can use to see if we are "logged in"
    def emit(self, record):
        if record.getMessage().startswith("Logged in!!"):
            self.loggedin = True
    def login(self):
        client.login()

# attach the handler to the root logger so the py5paisa log message reaches it
# tutorial here: https://betterstack.com/community/questions/how-to-disable-logging-from-python-request-library/
c = PaisaClient()
logging.basicConfig(handlers=[c], level=0, force=True)
c.login()
print(c.loggedin)  # True once the "Logged in!!" message has been seen
I am new to web-scraping.
I would like to follow all the links in Webpage_A. For each link, I need to select some elements by XPath and download the data to an Excel file.
In the code below, I used yield response.follow to follow all the links in Webpage_A. In def parse_table, I used webdriver and driver.get to access the webpages behind the links.
Is there another way to access the webpages instead, e.g. SeleniumRequest? (A sketch follows the code below.)
I noted that driver.implicitly_wait(120) doesn't keep Chrome open; I have to use sleep. Why doesn't implicitly_wait work?
Finally, why does the Chrome browser auto-close although I didn't use driver.close()?
class testSpider2(scrapy.Spider):
    name = 'test_2'

    def start_requests(self):
        yield SeleniumRequest(
            url='CONFIDENTIAL',
            wait_time=15,
            screenshot=True,
            callback=self.parse
        )

    def parse(self, response):
        tables_name = response.xpath("//div[@class='contain wrap:l m-x:a']//li")
        for t in tables_name:
            name = t.xpath(".//a/span/text()").get()
            link = t.xpath(".//a/@href").get()
            if link:
                yield response.follow(url=link, callback=self.parse_table, meta={'table_name': name, 'link_t': link})

    def parse_table(self, response):
        chrome_path = "C:/Users/chromedriver.exe"
        driver = webdriver.Chrome(executable_path=chrome_path)
        name = response.request.meta['table_name']
        link_t = response.request.meta['link_t']
        driver.get(link_t)
        driver.implicitly_wait(120)
        button_select = driver.find_element_by_xpath("(//a[text()='Select All'])").click()
        button_st_yr = driver.find_element_by_xpath("//select[@name='ctl00$ContentPlaceHolder1$StartYearDropDownList']/option[1]").click()
        button_end_mth = driver.find_element_by_xpath("//select[@name='ctl00$ContentPlaceHolder1$EndMonthDropDownList']/option[text()='Dec']").click()
        button_download = driver.find_element_by_xpath("//input[@id='ctl00_ContentPlaceHolder1_DownloadButton']").click()
        time.sleep(120)
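For reference, here is a sketch of the SeleniumRequest alternative asked about above, assuming the scrapy-selenium middleware is enabled (it already is, since start_requests uses SeleniumRequest); it avoids creating a second webdriver inside parse_table:
    # inside testSpider2: follow each link with SeleniumRequest instead of response.follow
    def parse(self, response):
        tables_name = response.xpath("//div[@class='contain wrap:l m-x:a']//li")
        for t in tables_name:
            name = t.xpath(".//a/span/text()").get()
            link = t.xpath(".//a/@href").get()
            if link:
                # let the middleware drive the browser for each followed link
                yield SeleniumRequest(url=response.urljoin(link), wait_time=15,
                                      callback=self.parse_table,
                                      meta={'table_name': name, 'link_t': link})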
I have made a YouTube automation bot. I am getting an error: unable to locate element (for the XPath of the subscribe button).
Here is my code:
from selenium import webdriver
from selenium import common
from selenium.webdriver.common import keys
from webdriver_manager.firefox import GeckoDriverManager
import time
class actions:
    def __init__(self, email, password):
        self.email = email
        self.password = password
        profile = webdriver.FirefoxProfile()
        profile.set_preference("dom.webdriver.enabled", False)
        profile.set_preference('useAutomationExtension', False)
        profile.update_preferences()
        driver = webdriver.Firefox(
            executable_path=GeckoDriverManager().install(), firefox_profile=profile)
        self.bot = driver
        # self.bot.maximize_window()
        self.bot.set_window_size(400, 700)
        self.is_logged_in = False

    def login(self):
        bot = self.bot
        bot.get("https://accounts.google.com/signin/v2/identifier?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3Dhttps%253A%252F%252Fwww.youtube.com%252F&hl=en&ec=65620&flowName=GlifWebSignIn&flowEntry=ServiceLogin")
        time.sleep(5)
        try:
            email = bot.find_element_by_name('identifier')
        except common.exceptions.NoSuchElementException:
            time.sleep(5)
            email = bot.find_element_by_name('identifier')
        email.clear()
        email.send_keys(self.email + keys.Keys.RETURN)
        time.sleep(5)
        try:
            password = bot.find_element_by_name('password')
        except common.exceptions.NoSuchElementException:
            time.sleep(5)
            password = bot.find_element_by_name('password')
        password.clear()
        password.send_keys(self.password + keys.Keys.RETURN)
        time.sleep(5)
        self.is_logged_in = True

    def kill(self):
        bot = self.bot
        bot.quit()

    def subscribe(self, url):
        if not self.is_logged_in:
            return
        bot = self.bot
        bot.get(url)
        time.sleep(4)
        try:
            value = bot.find_element_by_xpath(
                '/html/body/ytd-app/div/ytd-page-manager/ytd-watch-flexy/div[5]/div[1]/div/div[7]/div[2]/ytd-video-secondary-info-renderer/div/div/div/ytd-subscribe-button-renderer/tp-yt-paper-button').get_attribute('aria-label')
            value = value.split()
        except:
            bot.execute_script(
                'window.scrollTo(0,document.body.scrollHeight/3.5)')
            time.sleep(3)
            value = bot.find_element_by_xpath(
                '/html/body/ytd-app/div/ytd-page-manager/ytd-watch-flexy/div[5]/div[1]/div/div[7]/div[2]/ytd-video-secondary-info-renderer/div/div/div/ytd-subscribe-button-renderer/tp-yt-paper-button').get_attribute('aria-label')
            value = value.split(':')
        if value[0] == "Subscribe":
            try:
                bot.find_element_by_xpath(
                    '/html/body/ytd-app/div/ytd-page-manager/ytd-watch-flexy/div[5]/div[1]/div/div[7]/div[2]/ytd-video-secondary-info-renderer/div/div/div/ytd-subscribe-button-renderer/tp-yt-paper-button').click()
                time.sleep(3)
            except:
                bot.execute_script(
                    'window.scrollTo(0,document.body.scrollHeight/3.5)')
                time.sleep(3)
                bot.find_element_by_xpath(
                    '/html/body/ytd-app/div/ytd-page-manager/ytd-watch-flexy/div[5]/div[1]/div/div[7]/div[2]/ytd-video-secondary-info-renderer/div/div/div/ytd-subscribe-button-renderer/tp-yt-paper-button').click()
                time.sleep(3)
How can I resolve this issue? I am not able to understand where things are going wrong. Should I try finding elements by ID or some other way instead of XPath?
Or is there a problem with some software?
Please help me out.
Always use relative XPaths in your tests. Using absolute XPaths will cause regular test failures.
Refer to this tutorial about writing relative XPaths: https://www.guru99.com/xpath-selenium.html
This extension will help you write relative XPaths: https://chrome.google.com/webstore/detail/chropath/ljngjbnaijcbncmcnjfhigebomdlkcjo
You can write XPaths in different ways, using functions like text(), starts-with() and contains(), so you can also locate elements by their visible text.
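For example, a relative XPath for the subscribe button might look like this (a sketch only; the element and attribute names are illustrative and may need adjusting to YouTube's current markup):
subscribe_button = bot.find_element_by_xpath(
    "//ytd-subscribe-button-renderer//tp-yt-paper-button")
# or locate it by its visible text:
subscribe_button = bot.find_element_by_xpath(
    "//tp-yt-paper-button[contains(., 'Subscribe')]")
subscribe_button.click()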
In the past I often ran into problems when a website is "lazy loading".
It helped when I used a search for the ID like this:
element = driver.find_element_by_id("analyst-estimate")
driver.execute_script("arguments[0].scrollIntoView();", element)
Now I have seen that this does not work for every site.
On the following site everything works fine:
link = "https://www.gurufocus.com/stock/AAPL/summary"
options = Options ()
options.add_argument ('--headless')
options.add_experimental_option ('excludeSwitches', ['enable-logging'])
path = os.path.abspath (os.path.dirname (sys.argv[0]))
if platform == "win32": cd = '/chromedriver.exe'
elif platform == "linux": cd = '/chromedriver_linux'
elif platform == "darwin": cd = '/chromedriver'
driver = webdriver.Chrome (path + cd, options=options)
driver.get (link) # Read link
time.sleep (2) # Wait till the full site is loaded
element = driver.find_element_by_id ("analyst-estimate")
driver.execute_script ("arguments[0].scrollIntoView();", element)
time.sleep (1)
But on another site (also with an ID) it is not working at all:
link = "https://finance.yahoo.com/quote/MSFT/analysis?p=MSFT"
options = Options ()
options.add_argument ('--headless')
options.add_experimental_option ('excludeSwitches', ['enable-logging'])
path = os.path.abspath (os.path.dirname (sys.argv[0]))
if platform == "win32": cd = '/chromedriver.exe'
elif platform == "linux": cd = '/chromedriver_linux'
elif platform == "darwin": cd = '/chromedriver'
driver = webdriver.Chrome (path + cd, options=options)
driver.get (link) # Read link
time.sleep (2) # Wait till the full site is loaded
element = driver.find_element_by_id ("YDC-Col1")
# element = driver.find_element_by_id ("Col2-4-QuoteModule-Proxy")
# element = driver.find_element_by_id ("app")
driver.execute_script ("arguments[0].scrollIntoView();", element)
time.sleep (1)
Why is this not working for the second website?
It is the exact same code - why does it not find the ID? It exists on the webpage.
There is an accept-cookies pop-up displayed before the page is loaded; you have to click that first:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 5).until(
    EC.presence_of_element_located((By.NAME, "agree"))).click()
WebDriverWait(driver, 5).until(
    EC.presence_of_element_located((By.ID, "YDC-Col1")))
Before testing something in headless mode, check in non-headless mode to see the actual behaviour, and if it fails only in headless mode, take a screenshot to know the state of the website during the failure.
You can take a screenshot like this:
try:
    link = "https://finance.yahoo.com/quote/MSFT/analysis?p=MSFT"
    options = ChromeOptions()
    options.add_argument('--headless')
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    driver = webdriver.Chrome(options=options)
    driver.get(link)  # Read link
    time.sleep(2)  # Wait till the full site is loaded
    element = driver.find_element_by_id("YDC-Col1")
    # element = driver.find_element_by_id("Col2-4-QuoteModule-Proxy")
    # element = driver.find_element_by_id("app")
    driver.execute_script("arguments[0].scrollIntoView();", element)
    time.sleep(1)
except:
    driver.get_screenshot_as_file("a.jpeg")
time.sleep() isn't very stable for waiting for page loads. Switch over to webdriver waits. Also, the page doesn't reliably finish loading within 2 seconds.
wait = WebDriverWait(driver, 5)
wait.until(EC.presence_of_element_located((By.ID, "YDC-Col1")))
Another possible issue is running headless without setting a window size.
options.add_argument('--headless')
options.add_argument("--window-size=1920,1080")
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
I have a program to download photos from various websites. Each URL is formed by appending codes to the end of the address; the codes come from a dataframe of 8,583 rows.
The sites use JavaScript, so I use selenium to access the src of the photos, and I download them with urllib.request.urlretrieve.
Example of a photo site: http://divulgacandcontas.tse.jus.br/divulga/#/candidato/2018/2022802018/PB/150000608817
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
from bs4 import BeautifulSoup
import time
import urllib.request, urllib.parse, urllib.error

# Root URL of the site that is accessed to fetch the photo link
url_raiz = 'http://divulgacandcontas.tse.jus.br/divulga/#/candidato/2018/2022802018/'

# Accesses the dataframe that has the "sequencial" type codes
candidatos = pd.read_excel('candidatos_2018.xlsx', sheet_name='Sheet1', converters={'sequencial': lambda x: str(x), 'cpf': lambda x: str(x), 'numero_urna': lambda x: str(x)})

# Function that opens each page and takes the link from the photo
def pegalink(url):
    profile = webdriver.FirefoxProfile()
    browser = webdriver.Firefox(profile)
    browser.get(url)
    time.sleep(10)
    html = browser.page_source
    soup = BeautifulSoup(html, "html.parser")
    browser.close()
    link = soup.find("img", {"class": "img-thumbnail img-responsive dvg-cand-foto"})['src']
    return link

# Function that downloads the photo and saves it with the code name "cpf"
def baixa_foto(nome, url):
    urllib.request.urlretrieve(url, nome)

# Iteration in the dataframe
for num, row in candidatos.iterrows():
    cpf = (row['cpf']).strip()
    uf = (row['uf']).strip()
    print(cpf)
    print("-/-")
    sequencial = (row['sequencial']).strip()
    # Creates full page address
    url = url_raiz + uf + '/' + sequencial
    link_foto = pegalink(url)
    baixa_foto(cpf, link_foto)
Please, I am looking for guidance on how to:
Put in a try/except block to wait for the page to load (I'm having errors reading the src; after many hits the site takes more than ten seconds to load).
Record all possible errors, in a file or dataframe, noting the "sequencial" code that gave an error, and continue the program.
Would anyone know how to do it? The guidelines below were very useful, but I was unable to move forward.
I put in a folder a part of the data I use and the program, if you want to look: https://drive.google.com/drive/folders/1lAnODBgC5ZUDINzGWMcvXKTzU7tVZXsj?usp=sharing
Put your code within:
try:
    WebDriverWait(browser, 30).until(lambda driver: page_has_loaded())
    # here goes your code
except Exception:
    print("This is an unexpected condition!")
For page_has_loaded (the page-load check used above):
def page_has_loaded():
    page_state = browser.execute_script(
        'return document.readyState;'
    )
    return page_state == 'complete'
The 30 above is the time in seconds. You can adjust it as per your need.
Approach 2:
class wait_for_page_load(object):
    def __init__(self, browser):
        self.browser = browser
    def __enter__(self):
        self.old_page = self.browser.find_element_by_tag_name('html')
    def page_has_loaded(self):
        new_page = self.browser.find_element_by_tag_name('html')
        return new_page.id != self.old_page.id
    def __exit__(self, *_):
        WebDriverWait(self.browser, 30).until(lambda driver: self.page_has_loaded())
def pegalink(url):
    profile = webdriver.FirefoxProfile()
    browser = webdriver.Firefox(profile)
    browser.get(url)
    try:
        with wait_for_page_load(browser):
            html = browser.page_source
            soup = BeautifulSoup(html, "html.parser")
            browser.close()
            link = soup.find("img", {"class": "img-thumbnail img-responsive dvg-cand-foto"})['src']
    except Exception:
        print("This is an unexpected condition!")
        print("Erro em: ", url)
        link = "Erro"
    return link
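For the second point (recording all errors and continuing), one sketch, assuming a CSV file is an acceptable place to record them, is to collect the failing "sequencial" codes in a list during the loop and dump them with pandas at the end:
# Sketch only: wraps the existing loop body so one bad page doesn't stop the run
erros = []
for num, row in candidatos.iterrows():
    cpf = (row['cpf']).strip()
    uf = (row['uf']).strip()
    sequencial = (row['sequencial']).strip()
    url = url_raiz + uf + '/' + sequencial
    try:
        link_foto = pegalink(url)
        baixa_foto(cpf, link_foto)
    except Exception as e:
        # note the failing code and move on to the next row
        erros.append({'sequencial': sequencial, 'erro': str(e)})
pd.DataFrame(erros).to_csv('erros_download.csv', index=False)  # assumed output filename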