How do you use selenium in Django to choose and select an option in a <select> tag of a form?
This is how far I got:
def setUp(self):
    self.browser = webdriver.Firefox()

def tearDown(self):
    self.browser.quit()

def test_project_info_form(self):
    # set url
    self.browser.get(self.live_server_url + '/tool/project_info/')
    # get module select
    my_select = self.browser.find_element_by_name('my_select')
    #! select an option, say the first option !#
    ...
So this post was very useful:
https://sqa.stackexchange.com/questions/1355/what-is-the-correct-way-to-select-an-option-using-seleniums-python-webdriver
Basically I had to target the <select> and <option> by xpath directly, followed by a click event:
self.browser.find_element_by_xpath(
    "//select[@id='my_select_id']/option[text()='my_option_text']"
).click()
Or I could have targeted the option by index:
self.browser.find_element_by_xpath(
    "//select[@id='my_select_id']/option[2]"
).click()
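For completeness, Selenium also ships a Select helper class that wraps <select> elements; here is a minimal sketch reusing the element located by name in the test above (the option text and value are placeholders):
from selenium.webdriver.support.ui import Select

# wrap the <select> that was found by name in the test
select = Select(self.browser.find_element_by_name('my_select'))

# choose an option by visible text, by value attribute, or by position
select.select_by_visible_text('my_option_text')
# select.select_by_value('my_option_value')
# select.select_by_index(0)  # first option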
I hope this is helpful to someone with a similar problem.
Based on the Selenium documentation, the find-element-by-CSS-selector syntax is element = driver.find_element_by_css_selector('#foo'), but the example shows there is a.nav before the # sign (a.nav#home), which, based on this website, is an HTML tag.
In another part of the Selenium documentation the CSS selector doesn't even have the # sign: ele = driver.find_element(By.CSS_SELECTOR, 'h1')
Questions:
Which syntax is correct? With or without the HTML tag? With or without the # sign?
In Visual Studio Code I used these syntaxes to find the search boxes or sign-in boxes. It worked on this website but didn't work on this website. Could you help me find the search box using css_selector on this website?
Here is an example of my scripts:
from selenium import webdriver
from selenium.webdriver.common.by import By
try:
    driver = webdriver.Chrome()
    driver.get("https://www.arizonarealestate.com")
    searchBox = driver.find_element(By.CSS_SELECTOR, "#input[placeholder='Enter city, address, neighborhood, zip, or MLS #']")
    searchBox = driver.find_element(By.CSS_SELECTOR, "input#input[placeholder='Enter city, address, neighborhood, zip, or MLS #']")
    searchBox.send_keys("Some text")
    searchBtn = driver.find_element(By.CSS_SELECTOR, "button#.btn.btn-primary.btn-lg.btn-block.js-qs-btn").click()
finally:
    #print("============ Done!")
    driver.quit()
Generally speaking, a CSS selector is just a string with some specific syntax; it is not really defined by Selenium WebDriver itself.
You should have a look at the MDN description of CSS selectors.
In your question you specifically seem to be asking where to use the id selector, written with the # character. This selector should actually be used just by itself, since all ids on a page should be unique and therefore no other information is needed.
In your example, the CSS selector #input[placeholder='...'] would select an element with an id equal to input.
If you intended to select an input tag with a specific placeholder, you should omit the #.
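Applied to the snippet from the question, a corrected sketch could look like the following; note that the placeholder text and button classes are copied from the question itself and may not match the live site:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://www.arizonarealestate.com")

    # attribute selector on the tag, no leading '#' since we are not matching an id
    search_box = driver.find_element(
        By.CSS_SELECTOR,
        "input[placeholder='Enter city, address, neighborhood, zip, or MLS #']",
    )
    search_box.send_keys("Some text")

    # class names are chained with '.', again without a '#'
    driver.find_element(
        By.CSS_SELECTOR, "button.btn.btn-primary.btn-lg.btn-block.js-qs-btn"
    ).click()
finally:
    driver.quit()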
I'm trying to print the search results of DuckDuckGo using a headless WebDriver and Selenium. However, I cannot locate the DOM elements referring to the search results, no matter what ID or class name I search for and no matter how long I wait for the page to load.
Here's the code:
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

opts = Options()
opts.headless = False
browser = Firefox(options=opts)
browser.get('https://duckduckgo.com')
search = browser.find_element_by_id('search_form_input_homepage')
search.send_keys("testing")
search.submit()

# wait for URL to change with 15 seconds timeout
WebDriverWait(browser, 15).until(EC.url_changes(browser.current_url))
print(browser.current_url)

results = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "links"))
)
time.sleep(10)

results = browser.find_elements_by_class_name('result results_links_deep highlight_d result--url-above-snippet')  # I tried many other IDs and class names
print(results)  # prints []
I'm starting to suspect there is some trickery to avoid web scraping in DuckDuckGo. Does anyone have a clue?
I changed it to use a CSS selector and then it works. I use Java, not Python.
List<WebElement> elements = driver.findElements(
By.cssSelector(".result.results_links_deep.highlight_d.result--url-above-snippet"));
System.out.println(elements.size());
//10
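For the original Python code, the equivalent would be something along these lines, reusing the browser object from the question and assuming those class names are still present on the results page:
# the compound class names from the question, written as one CSS selector
results = browser.find_elements_by_css_selector(
    ".result.results_links_deep.highlight_d.result--url-above-snippet"
)
print(len(results))  # e.g. 10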
I am working with the Behave framework in Python, which I have not used before, and I am not sure how I can click on an element by id. There is a cookie popup that I need to get around before I can send the login keys.
This is my .features file:
Feature: Login Functionality

  Scenario: I can login
    When visit url "https://example.com"
    When I click on the button "accept-cookie-notification"
    When field with name "user_email_login" is given "#gmail.com"
    When field with name "user_password" is given "password"
    Then title becomes "Dashboard"
Here is my .py file:
Steps
@when('visit url "{url}"')
def step(context, url):
    context.browser.get(url)
    time.sleep(5)

@when('I click on the button "{selector}"')
def step(context, selector):
    elem = context.driver.find_element_by_id("selector")
    elem.submit()
    time.sleep(5)

@when('field with name "{selector}" is given "{value}"')
def step(context, selector, value):
    elem = context.browser.find_element_by_id(selector)
    elem.send_keys(value)
    elem.submit()
    time.sleep(5)

@then('title becomes "{title}"')
def step(context, title):
    assert context.browser.title == title
Also I will need to do element_by_css and xpath later on.
Thank you in advance for any help.
Working with Selenium is very simple if you are using Behave. Just install the behave-webdriver package: https://pypi.org/project/behave-webdriver/. It's a step library with a wide set of already defined steps with given-when-then decorators, such as I click on the element "{element}", and you can use id, CSS, and XPath as the {element} parameter. You do not need to implement anything; just use the predefined steps in your scenarios. Here are my code examples:
Scenario: A user can log in.
  Given the base url is "http://my_site:3000"
  And I open the site "/#/login"
  And the element "#email" is visible
  When I add "my#email.com" to the inputfield "#email"
  And I add "1234567" to the inputfield "#password"
  And I click on the button "#loginButton"
  Then I wait on element "//nav-toolbar//span[contains(text(),'Myself')]" to be visible
  And I expect that the attribute "label" from element "#loggedUser" is "Finally, we made it!"
And please do not forget to add handling of context.behave_driver in your environment.py methods before_all() and after_all(), as described in the HOWTO page above.
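For reference, a minimal environment.py along the lines of the behave-webdriver HOWTO might look like this (the browser class here is an assumption; pick whichever WebDriver you use):
# environment.py -- sketch based on the behave-webdriver documentation
import behave_webdriver

def before_all(context):
    # the predefined steps expect the driver to live on context.behave_driver
    context.behave_driver = behave_webdriver.Chrome()

def after_all(context):
    # clean up after the test run
    context.behave_driver.quit()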
What you need is to read some documentation; try taking a look at something like this cheat sheet, which lists the most common commands:
http://allselenium.info/python-selenium-commands-cheat-sheet-frequently-used/
So I need to scrape a page like this one, for example, and I am using Scrapy + Selenium to interact with the datepicker calendar, but I am running into ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with.
So far I have:
def parse(self, response):
    self.driver.get("https://www.airbnb.pt/rooms/9315238")
    try:
        element = WebDriverWait(self.driver, 10).until(
            EC.presence_of_element_located((By.XPATH, "//input[@name='checkin']"))
        )
    finally:
        x = self.driver.find_element_by_xpath("//input[@name='checkin']").click()
        import ipdb; ipdb.set_trace()
        self.driver.quit()
I saw some references on how to achieve this: https://stackoverflow.com/a/25748322/977622 and https://stackoverflow.com/a/19009256/977622.
I would appreciate it if someone could help me out with my issue, or even provide a better example of how I can interact with this datepicker calendar.
There are two elements with name="checkin" - the first one that you actually find is invisible. You need to make your locator more specific to match the desired input. I would also use the visibility_of_element_located condition instead:
element = WebDriverWait(self.driver, 10).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, ".book-it-panel input[name=checkin]"))
)
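Once the wait resolves, the returned element is visible and can be interacted with directly, e.g.:
element.click()  # opens the datepicker for the check-in field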
I am new to Selenium. I want to automate the select option present on my webpage. I am trying to use select with XPath. Is it possible to get the innerHTML without having an id, but only with XPath?
If yes, how? If no, then how do I solve the issue with select?
Yes, it is possible. Start here: http://www.w3schools.com/xpath/.
And here's a quick dropdown example in python:
from selenium.webdriver import Chrome
SETTINGS_PAGE_URL = 'chrome://settings/browser'
SEARCH_ENGINE_DROPDOWN_ID = 'defaultSearchEngine'
SEARCH_ENGINE_CHOICE_XPATH = '//option[text()="Google"]'
browser = Chrome()
browser.get(SETTINGS_PAGE_URL)
dropdown = browser.find_element_by_id(SEARCH_ENGINE_DROPDOWN_ID)
option = dropdown.find_element_by_xpath(SEARCH_ENGINE_CHOICE_XPATH)
option.click()
Anyway, without the HTML code of the page, I can give you only general advice about XPath. See this page: http://zvon.org/xxl/XPathTutorial/Output/example1.html
It helped me a lot in understanding the XPath approach.
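To make that concrete for a page without ids, a rough sketch could look like the following; the URL and element names here are hypothetical, so adjust them to your own markup:
from selenium import webdriver
from selenium.webdriver.support.ui import Select

browser = webdriver.Firefox()
browser.get("https://example.com/form")  # placeholder URL

# locate the <select> purely by XPath, no id required
select_elem = browser.find_element_by_xpath("//form//select[@name='my_select']")

# innerHTML of any located element is available via get_attribute()
print(select_elem.get_attribute("innerHTML"))

# the Select helper then handles choosing an option by its visible text
Select(select_elem).select_by_visible_text("my_option_text")
browser.quit()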