I have reviewed several questions pertaining to this popular topic but have not yet found a solution. I am trying to scrape a dynamic webpage that requires the user to click something and then enter some input. The site I am trying to scrape is here: https://a810-dobnow.nyc.gov/publish/#!/
I am trying to click where it says "Building Identification Number" and then enter some input. I cannot even locate the element I need to click. I used an explicit wait and also checked whether the element sits inside a frame I would need to switch to; as far as I can tell, it does not:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("C:\\Users\\#####\\Downloads\\chromedriver_win32\\chromedriver.exe")
driver.get("https://a810-dobnow.nyc.gov/publish/#!/")
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, '//*[@id="accordiongroup-9-9180-tab"]/h4/a/span/h3'))
)
element.click()
I just loaded the page, and when I try to search the DOM for the XPath you provided, it fails to find a matching element.
I'd recommend using something like:
driver.find_element_by_xpath("//h3[contains(text(), 'Building Identification Number (BIN)')]").click()
Hope this helps
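A minimal sketch of that approach, pairing the text-based XPath with an explicit wait. The small helper below is an assumption added for illustration, not part of the original answer:

```python
def text_xpath(tag, text):
    """Build a contains()-based XPath that matches an element by its visible text."""
    return "//{}[contains(text(), '{}')]".format(tag, text)

# Hypothetical usage with an explicit wait (assumes `driver` is a live WebDriver):
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
#
# header = WebDriverWait(driver, 10).until(
#     EC.element_to_be_clickable((By.XPATH, text_xpath("h3", "Building Identification Number (BIN)")))
# )
# header.click()
```

Waiting for `element_to_be_clickable` rather than mere presence avoids clicking an element that is still hidden inside a collapsed accordion.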
I was working on scraping links to articles on a website. Normally, when the site loads, it lists only 5 articles; you then have to click a "load more" button to display the rest of the article list.
The HTML source has links to only the first five articles.
I used Selenium with Python to automate clicking the load more button so the page is fully loaded with all article listings.
The question now is: how can I extract the links to all those articles?
After loading the site completely with Selenium, I tried to get the HTML source with driver.page_source and printed it, but it still has links to only the first 5 articles.
I want to get the links to all the articles that were loaded after clicking the load more button.
Can someone please help with a solution?
Maybe the links take some time to show up and your code is reading driver.page_source before the source is updated. You can select the links with Selenium after an explicit wait, so that you can be sure the links that are dynamically added to the page are fully loaded. It is difficult to pin down exactly what you need without a link to your source, but (in Python) it should be something similar to:
from selenium.webdriver.support.ui import WebDriverWait

def condition(driver):
    """If the selector defined in the function retrieves 10 or more results, return the results.
    Else, return None.
    """
    selector = 'a.my_class'  # Selects all <a> tags with the class "my_class"
    els = driver.find_elements_by_css_selector(selector)
    if len(els) >= 10:
        return els

# Making an assignment only when the condition returns a truthy value (waiting up to 2 min):
links_elements = WebDriverWait(driver, timeout=120).until(condition)
# Getting the href attribute of the links
links_href = [link.get_attribute('href') for link in links_elements]
In this code, you are:
Repeatedly looking for the elements you want until there are 10 or more of them. You can do this with a CSS selector (as in the example), an XPath, or another method. This gives you a list of Selenium objects as soon as the wait condition returns a truthy value, up to a certain timeout. See more on explicit waits in the documentation. You should write the appropriate condition for your case; expecting a certain number of links may not work if you are not sure how many links there will be in the end.
Extracting what you want from the Selenium objects. For that, use the appropriate method on the elements in the list you got from the step above.
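For the extraction step, note that anchors collected this way often include duplicates or elements without an href; a small helper (an assumption added for illustration, not part of the original answer) can clean the list before use:

```python
def dedupe_hrefs(hrefs):
    """Drop None values and duplicates from an iterable of href strings, preserving order."""
    seen = set()
    result = []
    for href in hrefs:
        if href is not None and href not in seen:
            seen.add(href)
            result.append(href)
    return result

# Hypothetical usage after the wait above:
# links_href = dedupe_hrefs(link.get_attribute('href') for link in links_elements)
```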
Can you help me identify the element ID, or any other locator, of the timeline composer in a Facebook profile?
I need this for use in Robot Framework with Selenium2Library to post something on my wall.
I can log in to Facebook and navigate to the profile, but I can't input text into the timeline composer. I tried using Click Element before inserting text, but with no success.
I am using "inspect element" in browsers/firebug add-on to identify elements.
In this case, unfortunately, all the locators I have tried give errors like:
Element does not appear in 5 seconds
or
Element must be user editable in order to clear it
The non-dynamic locator for the FB timeline composer has the name "xhpc_message_text" (as of 18.10.2016):
Input Text    name=xhpc_message_text    test
I can find broken links/images on any particular webpage, but I am not able to find them across all the pages using Selenium. I have gone through many blogs but didn't find any working code. It would be a great help if any of you could help me fix this problem.
Collect all the href attributes on your page, using the 'a' and 'img' tag names, into a list.
In Java, iterate over the list and set up an HttpURLConnection for each URL from the href list. Connect to it and check the response code. Google for the logic and the meanings of the response codes.
If you want to check broken images across all the pages, you can use the HttpClient library to check the status codes of the images on a page.
First, find all the images on each page using WebDriver.
Below is the syntax:
List<WebElement> imagesList = driver.findElements(By.tagName("img"));
Below is the syntax to get the links:
List<WebElement> anchorTagsList = driver.findElements(By.tagName("a"));
Now you need to iterate through each image and verify the response code with HttpStatus.
You can find an example here: Find Broken / Invalid Images on a Page
You can find an example here: Find Broken Links on a Page
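The core of the check, in whatever language, is: collect the URLs, request each one, and flag responses in the 4xx/5xx range as broken. A minimal sketch of that decision logic in Python (the helper names are assumptions, not from the answers above):

```python
def is_broken(status_code):
    """Treat 4xx (client error) and 5xx (server error) HTTP responses as broken."""
    return status_code >= 400

# Hypothetical usage with the standard library (network call left commented out):
# from urllib.request import urlopen, Request
# from urllib.error import HTTPError, URLError
#
# def check_link(url):
#     try:
#         with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
#             return is_broken(resp.status)
#     except (HTTPError, URLError):
#         return True  # unreachable counts as broken
```

A HEAD request is usually enough for link checking, since only the status code matters, not the body.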
I have a login form on a website,
and I want to make an automatic test case for this scenario:
I give a phone number but no password.
How do I get Selenium to check for the expected red error message popup?
Which function in the Selenium API checks for it? I did not find one!
EDIT: or maybe the thing I should expect to happen is "stay on the same page"?
There is no specific function like that. Find out the XPath for that red text. Then you can create a web element using that XPath, check whether it is visible and what its text value is, and assert on that.
As an example:
You first define the error message WebElement as messageElement.
You may locate the element like this:
WebElement messageElement = driver.findElement(By.id("error-message"));
Then you may get the error message text using this:
String actualMessage = messageElement.getText();
Once you have the text, you can validate it against the expected text:
Assert.assertEquals(actualMessage, expectedMessage);
PS:
From the snap, it doesn't seem to be a browser pop-up, so there is no need to use switchTo().alert() etc.
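The same check in Python-flavoured Selenium is a visibility wait plus a text comparison. One practical detail: error messages rendered from markup often carry extra whitespace, so normalising before comparing makes the assertion robust. The helper and the "error-message" id below are assumptions for illustration:

```python
def normalize_message(text):
    """Collapse runs of whitespace and trim, so message comparison ignores markup formatting."""
    return " ".join(text.split())

# Hypothetical usage (assumes `driver` and an element with id "error-message"):
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
#
# el = WebDriverWait(driver, 5).until(
#     EC.visibility_of_element_located((By.ID, "error-message"))
# )
# assert normalize_message(el.text) == normalize_message(expected_message)
```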
I'm testing (using Selenium) a site containing a slickgrid.
To find the correct field to enter a value, I have to apply a filter, and then double click the field to enter the data.
The problem is that after applying the filter, nine times out of ten Selenium ends up with an exception that the element is no longer attached to the DOM, or is not present in the cache anymore. One run out of ten doesn't fail on this point.
I've tried about every bit of advice I can find on this issue, but none of it has brought sufficient help. Waiting and looping until the element is present, visible, etc. doesn't work.
So: is there a way to have Selenium locate an element in a slickgrid after the page has changed because of a filter action?
Thanks!
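Re-applying the filter replaces the grid's DOM nodes, so any WebElement found beforehand goes stale; the usual workaround is to re-locate the element fresh inside a retry loop instead of reusing the old reference. A hedged sketch of such a generic retry helper (the helper name and the selector are assumptions):

```python
import time

def retry_on(exceptions, action, attempts=5, delay=0.5):
    """Call `action` until it succeeds, retrying on the given exception type(s)."""
    for attempt in range(attempts):
        try:
            return action()
        except exceptions:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)

# Hypothetical usage: re-locate the cell on every attempt rather than holding
# a reference across the filter action (assumes a CSS selector for the cell):
# from selenium.common.exceptions import StaleElementReferenceException
# retry_on(
#     StaleElementReferenceException,
#     lambda: driver.find_element_by_css_selector(".slick-cell.selected").click(),
# )
```

The key point is that the lambda performs both the find and the click, so every retry starts from a fresh lookup of the post-filter DOM.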