I'm seeking to scrape a web page using Playwright.
I load the page and click the download button with Playwright successfully. This brings up a print dialog box with a printer selected.
I would like to select "Save as PDF" and then click the "Save" button.
Here's my current code:
from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    playwright_page = browser.new_page()
    got_error = False
    try:
        playwright_page.goto(url_to_start_from)
        print(playwright_page.title())
        html = playwright_page.content()
    except Exception as e:
        print(f"Playwright exception: {e}")
        got_error = True

    if not got_error:
        soup = BeautifulSoup(html, 'html.parser')
        # download pdf
        with playwright_page.expect_download() as download_info:
            playwright_page.locator("text=download").click()
        download = download_info.value
        path = download.path()
        download.save_as(DOWNLOADED_PDF_FOLDER)

    browser.close()
Is there a way to do this using Playwright?
Thanks very much to @KJ in the comments, who suggested that with headless=True, Chromium won't even put up a print dialog box in the first place.
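Since headless Chromium never shows the dialog, two approaches are possible: capture the download event directly, or render the page to a PDF yourself with page.pdf() (headless Chromium only). A minimal sketch, where the URL and output folder are placeholders:

from pathlib import Path
from playwright.sync_api import sync_playwright

DOWNLOADED_PDF_FOLDER = Path("downloads")  # placeholder output folder
DOWNLOADED_PDF_FOLDER.mkdir(exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL

    # Option 1: render the current page straight to PDF (headless Chromium only).
    page.pdf(path=str(DOWNLOADED_PDF_FOLDER / "page.pdf"))

    # Option 2: if the button triggers a real file download, capture that instead.
    # with page.expect_download() as download_info:
    #     page.locator("text=download").click()
    # download = download_info.value
    # download.save_as(DOWNLOADED_PDF_FOLDER / download.suggested_filename)

    browser.close()

Note that save_as() expects a full file path, so joining the folder with download.suggested_filename avoids passing a bare directory.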
Related
I'm using Selenium to scrape a webpage and it finds the elements on the main page, but when I use the click() function, the driver never finds the elements on the new page. I used BeautifulSoup to see if it's getting the HTML, but the HTML is always from the main page. (When I watch the driver window, it shows that the new page is opened.)
import bs4 as bs

html = driver.execute_script('return document.documentElement.outerHTML')
soup = bs.BeautifulSoup(html, 'html.parser')
print(soup.prettify())
I've used WebDriverWait() to see if the page just isn't loading, but even after 60 seconds it never does:
element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.ID, "ddlProducto")))
I also used execute_script() to check whether clicking the button via JavaScript loads the page, but it returns None when I print the variable that should hold the element from the new page:
selectProducto = driver.execute_script("return document.getElementById('ddlProducto');")
print(selectProducto)
I also tried chwd = driver.window_handles and driver.switch_to.window(chwd[1]), but it says that the index is out of range:
chwd = driver.window_handles
driver.switch_to.window(chwd[1])
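Not part of the original attempts, but a common pattern when a click opens content in a new tab or window is to wait until a second window handle exists and then switch to it before looking up elements. A minimal sketch, assuming the click has just been performed:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

original_handles = driver.window_handles

# ... the click that is supposed to open the new page goes here ...

# Wait for a second window/tab to appear, then switch the driver to it.
WebDriverWait(driver, 30).until(EC.number_of_windows_to_be(2))
new_handle = [h for h in driver.window_handles if h not in original_handles][0]
driver.switch_to.window(new_handle)

# Lookups and waits now run against the new page.
WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.ID, "ddlProducto")))

If the handle count never goes past 1, the click is probably loading content in the same document (for example inside an iframe), in which case switching frames would be the thing to check instead.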
I want to scrape this site for some of my natural language processing work. I have a subscription to the website, but I am still not able to get the result: I get an error saying it is unable to locate the element.
The link to the login page is the URL passed to driver.get() in the code below.
This is the code that I tried in Python with Selenium.
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
options.add_argument('--headless')
options.add_argument('--disable-blink-features=AutomationControlled')
options.add_experimental_option('useAutomationExtension', False)
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_argument("disable-infobars")
driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver", options=options)
driver.get('https://login.newscorpaustralia.com/login?state=hKFo2SBmOXc1TjRJNDlBX3hObkZPN1NsRWgzcktONTlPVnJMS6FupWxvZ2luo3RpZNkgUi1ZRmV2Z2dwcWJmZUpqdWtZdk5CUUllX0h3YngwanSjY2lk2SAwdjlpN0tvVzZNQkxTZmUwMzZZU1FUNzl6QThaYXo0WQ&client=0v9i7KoW6MBLSfe036YSQT79zA8Zaz4Y&protocol=oauth2&response_type=token%20id_token&scope=openid%20profile&audience=newscorpaustralia&site=couriermail&redirect_uri=https%3A%2F%2Fwww.couriermail.com.au%2Fremote%2Fidentity%2Fauth%2Flatest%2Flogin%2Fcallback.html%3FredirectUri%3Dhttps%253A%252F%252Fwww.couriermail.com.au%252Fsearch-results%253Fq%253Djason%252520huny&prevent_sign_up=true&nonce=7j4grLXRD39EVhGsxcagsO5c-PtAY4Md&auth0Client=eyJuYW1lIjoiYXV0aDAuanMiLCJ2ZXJzaW9uIjoiOS4xOS4wIn0%3D')
time.sleep(10)
elem = driver.find_element(by=By.CLASS_NAME,value='navigation_search')
username = driver.find_element(by=By.ID,value='1-email')
password = driver.find_element(by=By.NAME,value='password')
login = driver.find_element(by=By.NAME,value='submit')
username.send_keys("myid");
password.send_keys("password");
login.click();
time.sleep(20)
soup = BeautifulSoup(driver.page_source, 'html.parser')
search = driver.find_element(by=By.CSS_SELECTOR,value='form.navigation_search')
search.click()
search.send_keys("jason hunt")
print(driver.page_source)
Below is the error that I am getting. I want to grab the search icon and send keys to it, but I am not getting the search form after login.
Below is the text-based HTML of the element.
I tried printing the page source, and I was not able to locate the HTML element there either.
Not a proper answer, but since you can't add formatting to comments and this has the same desired effect:
driver.get("https://www.couriermail.com.au/search-results");
WebDriverWait(driver, timeout=10).until(lambda d: d.find_element(By.CLASS_NAME, "search_box_input"))
searchBox = driver.find_element(By.CLASS_NAME, "search_box_input")
searchBox.send_keys("test");
I'm trying to enter text into a field (the subject field in the image) in a section using Selenium.
I've tried locating it by XPath, ID, and a few others, but it looks like maybe I need to switch context to the section. I've tried the following; errors are in comments after the lines.
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from time import sleep

opts = Options()
browser = Firefox(options=opts)
browser.get('https://www.linkedin.com/feed/')
sign_in = '/html/body/div[1]/main/p/a'
browser.find_element_by_xpath(sign_in).click()
email = '//*[@id="username"]'
browser.find_element_by_xpath(email).send_keys(my_email)
pword = '//*[@id="password"]'
browser.find_element_by_xpath(pword).send_keys(my_pword)
signin = '/html/body/div/main/div[2]/div[1]/form/div[3]/button'
browser.find_element_by_xpath(signin).click()
search = '/html/body/div[8]/header/div[2]/div/div/div[1]/div[2]/input'
name = 'John McCain'
browser.find_element_by_xpath(search).send_keys(name+"\n")#click()
#click on first result
first_result = '/html/body/div[8]/div[3]/div/div[1]/div/div[1]/main/div/div/div[1]/div/div/div/div[2]/div[1]/div[1]/span/div/span[1]/span/a/span/span[1]'
browser.find_element_by_xpath(first_result).click()
#hit message button
msg_btn = '/html/body/div[8]/div[3]/div/div/div/div/div[2]/div/div/main/div/div[1]/section/div[2]/div[1]/div[2]/div/div/div[2]/a'
browser.find_element_by_xpath(msg_btn).click()
sleep(10)
## find subject box in section
section_class = '/html/body/div[3]/section'
browser.find_element_by_xpath(section_class) # no such element
browser.switch_to.frame('/html/body/div[3]/section') # no such frame
subject = '//*[@id="compose-form-subject-ember156"]'
browser.find_element_by_xpath(subject).click() # no such element
compose_class = 'compose-form__subject-field'
browser.find_element_by_class_name(compose_class) # no such class
id = 'compose-form-subject-ember156'
browser.find_element_by_id(id) # no such element
css_selector= 'compose-form-subject-ember156'
browser.find_element_by_css_selector(css_selector) # no such element
wind = '//*[@id="artdeco-hoverable-outlet__message-overlay"]'
browser.find_element_by_xpath(wind) #no such element
A figure showing the developer info for the text box in question is attached.
How do I locate the text box and send keys to it? I'm new to Selenium but have gotten through login and basic navigation to this point.
I've put the page source (as seen by the Selenium browser object at this point) here.
The page source (as seen when I click in the browser window and hit 'copy page source') is here.
Despite the window in focus being the one I wanted, it seems the browser object saw things differently. Using
window_after = browser.window_handles[1]
browser.switch_to.window(window_after)
allowed me to find the element using an XPath.
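As a follow-up, ids like compose-form-subject-ember156 are generated by Ember and the numeric suffix changes between sessions, so a locator that relies only on the stable prefix tends to survive reloads. A sketch, assuming the prefix seen in the ids above stays the same:

# Match any element whose id starts with the stable prefix (the suffix is generated).
subject_field = browser.find_element_by_css_selector('[id^="compose-form-subject"]')
subject_field.send_keys("Test subject")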
I have some code that downloads PDF files from a website, but the downloaded files are all corrupted: the PDFs appear to contain no data when I examine them in a hex editor. Any idea why?
EDIT - I have found that if I click on the link to the PDF it will load, but if I attempt to open it in a new tab or paste the URL into a new tab, I get blank output. The link has some JavaScript:
onclick="var win = window.open(this.href,'','');return false;"
Code
import requests
from os.path import join, basename

pdf_links = []
box_2 = right_div.find_all("div", {"class": "right"})[2]  # contains the PDF links
for link in box_2.find_all('a'):
    current_link = link.get('href')
    if current_link.endswith('pdf'):
        pdf_links.append('http://' + set_domain + current_link)

for url in pdf_links:
    response = requests.get(url)
    with open(join('C:/Users/Ninja2k/Desktop', basename(url)), 'wb') as f:
        f.write(response.content)
Within the context manager, close the file using f.close():
for url in pdf_links:
    response = requests.get(url)
    with open(join('C:/Users/Ninja2k/Desktop', basename(url)), 'wb') as f:
        f.write(response.content)
        f.close()
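If the saved files are still empty after that, another thing worth checking is whether the server expects the request to look like it came from the listing page; the onclick shown above suggests the PDFs are normally opened from there. A sketch that reuses one session and sends a Referer header (the header requirement is an assumption, not something confirmed for this site):

import requests
from os.path import join, basename

session = requests.Session()
headers = {
    'User-Agent': 'Mozilla/5.0',        # assumption: a browser-like UA
    'Referer': 'http://' + set_domain,  # assumption: the server checks the referring page
}

for url in pdf_links:
    response = session.get(url, headers=headers)
    response.raise_for_status()  # fail loudly instead of silently writing an empty file
    with open(join('C:/Users/Ninja2k/Desktop', basename(url)), 'wb') as f:
        f.write(response.content)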
I'm trying to scrape reviews from this site:
https://www.bbb.org/sacramento/business-reviews/heating-and-air-conditioning/elite-heating-air-conditioning-in-elk-grove-ca-47012326/reviews-and-complaints
But the content of the reviews isn't being loaded by Scrapy.
I then tried to use Selenium to click the button and load the content:
url = 'https://www.bbb.org/losangelessiliconvalley/business-reviews/plumbers/bryco-plumbing-in-chatsworth-ca-13096711/reviews-and-complaints'
driver_1 = webdriver.Firefox()
driver_1.get(url)
content = driver_1.page_source
REVIEWS_BUTTON = '//*[@class="button orange first"]'
button = driver_1.find_element_by_xpath(REVIEWS_BUTTON)
button.click()
But Selenium isn't able to find the button from the above XPath; I'm getting the following error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"//*[@class=\"button orange first\"]"}
Your button is located inside an iframe, so you need to switch to it first and then handle the button:
REVIEWS_BUTTON = '//*[@class="button orange first"]'
driver_1.switch_to.frame('the_iframe')
button = driver_1.find_element_by_xpath(REVIEWS_BUTTON)
button.click()
driver_1.switch_to.default_content()
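If the frame's name or id isn't known up front, it can also be located and entered in one step with an explicit wait; a sketch, where 'the_iframe' remains a placeholder for whatever identifies the frame on that page:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

wait = WebDriverWait(driver_1, 10)
# Wait for the frame to be present, then switch into it in one call.
wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID, 'the_iframe')))  # placeholder locator
button = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@class="button orange first"]')))
button.click()
driver_1.switch_to.default_content()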