How do I extract a ZIP code from a website using Selenium Python? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I am new to Python Selenium. I want to scrape https://tools.keycdn.com/geo and extract only the postal code (e.g. 10080), not any other text, and print it to the screen.

There is a cookie-accept button, so you need to click that first; then you can extract the postal code with the XPath //dt[text()='Postal code']//following-sibling::dd. See below.
Code:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(driver_path)
driver.maximize_window()
driver.get("https://tools.keycdn.com/geo")
wait = WebDriverWait(driver, 20)
# accept the cookie banner first
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[class*='alert-cookies']"))).click()
# the <dd> that follows the 'Postal code' <dt> holds the value
postal_code = wait.until(EC.visibility_of_element_located((By.XPATH, "//dt[text()='Postal code']//following-sibling::dd"))).text
print(postal_code)
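For reference, the following-sibling axis selects the <dd> element immediately after the matching <dt>. The same lookup can be sketched in plain Python with the stdlib ElementTree, on made-up markup mimicking the geo page's definition list:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup; the real page's <dl> has more entries.
html = ("<dl>"
        "<dt>City</dt><dd>New York</dd>"
        "<dt>Postal code</dt><dd>10080</dd>"
        "</dl>")

root = ET.fromstring(html)
children = list(root)
# Find the <dd> that directly follows the <dt> labelled 'Postal code'.
for i, el in enumerate(children):
    if el.tag == "dt" and el.text == "Postal code":
        print(children[i + 1].text)  # 10080
```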

Related

The output from my Selenium script is blank; how do I fix it?

First time using selenium for web scraping a website, and I'm fairly new to python. I have tried to scrape a Swedish housing site to extract price, address, area, size, etc., for every listing for a specific URL that shows all houses for sale in a specific area called "Lidingö".
I managed to bypass the pop-up window for accepting cookies.
However, the output I get from the terminal is blank when the script runs. I get nothing, not an error, not any output.
What could possibly be wrong?
The code is:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

s = Service("/Users/brustabl1/hemnet/chromedriver")
url = "https://www.hemnet.se/bostader?location_ids%5B%5D=17846&item_types%5B%5D=villa"
driver = webdriver.Chrome(service=s)
driver.maximize_window()
driver.implicitly_wait(10)
driver.get(url)
# The cookie button clicker
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[62]/div/div/div/div/div/div[2]/div[2]/div[2]/button"))).click()
lists = driver.find_elements(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]')
for list in lists:
    adress = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[2]/a/div[2]/div/div[1]/div[1]/h2')
    area = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[1]/div[1]/div/span[2]')
    price = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[1]')
    rooms = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[3]')
    size = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[2]')
    print(adress.text)
There are a lot of flaws in your code...
lists = driver.find_elements(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]')
In your code, this returns a list of elements in the variable lists.
for list in lists:
    adress = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[2]/a/div[2]/div/div[1]/div[1]/h2')
    area = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[1]/div[1]/div/span[2]')
    price = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[1]')
    rooms = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[3]')
    size = list.find_element(By.XPATH, '//*[@id="result"]/ul[1]/li[1]/a/div[2]/div/div[2]/div[1]/div[2]')
    print(adress.text)
You are not storing the value of each address in a list; instead, you overwrite the variable on every iteration. Also, an XPath starting with //* refers to one exact element in the document, so your loop selects the same element over and over again instead of searching inside each card.
And scraping large amounts of text through Selenium is slow; consider parsing the page source with BeautifulSoup instead.
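To illustrate the relative-search idea (the markup below is made up, not Hemnet's real structure): locate one node per listing card, query inside each card with a path relative to it, and append each value to a list. A stdlib ElementTree sketch:

```python
import xml.etree.ElementTree as ET

# Hypothetical listing markup; the real page differs.
html = """
<ul id="result">
  <li><h2>Street 1</h2><span>5 rooms</span></li>
  <li><h2>Street 2</h2><span>3 rooms</span></li>
</ul>
"""

root = ET.fromstring(html)
addresses = []
for card in root.findall("./li"):   # one element per listing
    h2 = card.find("h2")            # searched relative to this card only
    addresses.append(h2.text)

print(addresses)  # ['Street 1', 'Street 2']
```

In Selenium the equivalent is a dot-prefixed XPath scoped to the element, e.g. card.find_element(By.XPATH, ".//h2"); without the leading dot the search runs against the whole document again.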

Other WebElement is also treated as List&lt;WebElement&gt; [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
After accessing the web elements of a List, another web element is also being treated as a List&lt;WebElement&gt;:
List<WebElement> BrandTerms = driver.findElements(BrandTerm);
js = (JavascriptExecutor) driver;
for (int i = 0; i <= 1; i++) {
    js.executeScript("arguments[0].value='" + Bandtermsvalue + i + "'", BrandTerms.get(i));
}
js = null;
driver.findElements(By.id("btnAddBrandedTerms")).click();
Why is this web element treated as a list, with the message "click() is undefined for the type List"?
Alternatively, you may extract and click the first element found:
List<WebElement> BrandTerms = driver.findElements(BrandTerm);
js = (JavascriptExecutor) driver;
for (int i = 0; i <= 1; i++) {
    js.executeScript("arguments[0].value='" + Bandtermsvalue + i + "'", BrandTerms.get(i));
}
js = null;
driver.findElements(By.id("btnAddBrandedTerms")).get(0).click();
OR
driver.findElement(By.id("btnAddBrandedTerms")).click();
Please see webdriver API specs
https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/remote/server/handler/FindElement.html
and
https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/remote/server/handler/FindElements.html
I also highly recommend double-checking Selenium's top tips and tricks and trying them out in your project.
Hope this helps.
findElements returns a list of all the elements matching the given locator, whereas findElement returns only the first matching element. But be careful: if no element matches, findElement throws an exception, while findElements simply returns an empty list.
To solve the issue you are facing, instead of the line below:
driver.findElements(By.id("btnAddBrandedTerms")).click();
Please use this line of code:
driver.findElement(By.id("btnAddBrandedTerms")).click();
Hope it helps.

How to extract a couple of tables from a site using Selenium

Greetings all.
I am trying to extract tables from this site, https://theunderminejournal.com/#eu/silvermoon/category/battlepets, but I am having some difficulty. My code, and everything else I tried, failed to bring up any result:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time

def getbrowser():
    options = Options()
    options.add_argument("--disable-extensions")
    #options.add_argument('--headless')
    driver = webdriver.Chrome(options=options)
    return driver

def scrape():  # create scrape engine from scratch
    driver = getbrowser()
    start = time.time()
    site1 = "https://theunderminejournal.com/#eu/silvermoon/category/battlepets"
    driver.get(site1)
    time.sleep(10)
    tbody = driver.find_element_by_tag_name("table")
    #cell = tbody.find_elements_by_tag_name("tr").text
    for tr in tbody:
        td = tbody.find_elements_by_tag_name("tr")
        print(td)
    driver.close()

scrape()
My goal is to extract the name and the first price for each pet (from all the tables) and create a table with these two values.
Generally, I am building a scrape bot that will compare the prices from two servers.
I know that my scraping skills are too low; can you please point me to something I could read or watch to improve?
Thanks again for your time.
Get all the names and prices in two lists and use their values in order; just replace the print commands with whatever you want:
names = driver.find_elements_by_css_selector("[class='name'] a")
prices = driver.find_elements_by_css_selector(":nth-child(4)[class='price'] span")
for name, price in zip(names, prices):
    print(name.text)
    print(price.text)
hope it helps.

What is the XPath of the Gmail password field in the Firefox browser? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 5 years ago.
I am trying to automate Gmail in the Firefox browser using Selenium WebDriver and Firebug.
Selenium is not identifying the password field with my XPath.
What is the XPath of the password field?
Try the below XPath:
//*[@name="password"]
I suggest you verify the XPath in the Chrome browser's console, if your application supports Chrome, with the syntax below:
$x("//*[@name='password']")
While logging into your Gmail account, after you fill in the EmailID/Phone field and click the Next button, the Password field takes a small amount of time to become clickable/interactable within the viewport. Hence, apart from locating the Password field by XPath, you have to induce an explicit wait, i.e. WebDriverWait, as follows:
WebDriverWait wait = new WebDriverWait(driver, 20);
WebElement password = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//input[@name='password']")));
password.sendKeys("your_password");
Try the below XPath:
By.xpath(".//*[@id='password']/div[1]/div/div[1]/input")
Use this:
//input[@type='password']
Or else you can locate it like:
driver.findElement(By.name("password"));
Use this XPath:
//INPUT[@type='password']
Try this one. Firebug is outdated; I would suggest you use other tools for more complex XPaths.
Use these XPaths:
//input[@class='whsOnd zHQkBf'][@name='password']
//input[@type='password']
//*[@type='password']
//input[contains(@aria-label,'Enter your password')][@name='password']
//input[contains(@aria-label,'Enter your password')][@autocomplete='current-password']
Instead of using XPath you should use the name locator, which is preferable to XPath, like this:
System.setProperty("webdriver.chrome.driver", "E:\\software and tools\\chromedriver_win32\\chromedriver.exe");
WebDriver chrome_driver = new ChromeDriver();
chrome_driver.findElement(By.name("password")).sendKeys("xxxxxxxxxx");
Or you can use any XPath:
chrome_driver.findElement(By.xpath("xpath you want")).sendKeys("xxxxxxxxxx");
Use any of the XPaths below:
//input[@type='password']
OR
//input[@name='password']
OR
//input[contains(@aria-label,'Enter your password')]
OR
//input[contains(@autocomplete,'current-password')]
The complete code will look like below:
driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
driver.findElement(By.name("identifier")).sendKeys("xxx@gmail.com");
driver.findElement(By.xpath("//span[contains(.,'Next')]")).click();
driver.findElement(By.name("password")).sendKeys("123456");
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement element = wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath("//span[contains(.,'Next')]")));
element.click();

Selenium XPath: select every 5th element [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I use Selenium and XPath selectors. I know how to get a certain element, but how do I use XPath to get every 5th element?
You can always solve it in your language of choice.
Python
For example, in Python, extended slicing allows you to do it rather simply (this keeps the 1st, 6th, 11th, … rows):
driver.find_elements_by_xpath("//table/tr")[0::5]
You can also use the position() function and the mod operator (this keeps the 5th, 10th, 15th, … rows):
//table/tr[position() mod 5 = 0]
driver.find_elements_by_xpath("//table/tr[position() mod 5 = 0]")
Java
List<WebElement> elements = driver.findElements(By.xpath("//tbody/tr[position() mod 5 = 0]"));
System.out.println(elements.size());
for (WebElement element : elements) {
    System.out.println(element.getText());
}
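Note that the two approaches do not pick the same rows: slicing with [0::5] keeps the 1st, 6th, 11th, … matches, while position() mod 5 = 0 keeps the 5th, 10th, 15th, …. A quick plain-Python sketch of the difference, using row positions in place of real elements:

```python
# Simulate 20 table rows by their 1-based position.
rows = list(range(1, 21))

sliced = rows[0::5]                     # Python slice: every 5th item from index 0
mod5 = [r for r in rows if r % 5 == 0]  # equivalent of position() mod 5 = 0

print(sliced)  # [1, 6, 11, 16]
print(mod5)    # [5, 10, 15, 20]
```

Use rows[4::5] if you want the slice to match the XPath predicate.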