When I type an XPath into the FirePath text field, if the XPath is correct FirePath displays the corresponding HTML code. It was working fine previously.
But now it does not display the corresponding HTML even though the XPath is correct.
Can anyone help me find a solution to this problem? I even uninstalled FirePath and installed it again, but it still does not work.
If you visit the GitHub Page of FirePath, it clearly mentions:
FirePath is a Firebug extension that adds a development tool to edit, inspect and generate XPath expressions and CSS3 Selectors
Now if you visit the home page of Firebug, it clearly mentions that:
The Firebug extension isn't being developed or maintained any longer. We invite you to use the Firefox DevTools instead, which ship with Firebug.next
So the direction is clear: we have to use the DevTools [F12], which come integrated with Mozilla Firefox from the 56.x releases onwards.
Example Usage:
Now, let us assume we have to identify the xpath of the Search Box on Google Home Page.
Open Mozilla Firefox 56.x browser and browse to the url https://www.google.co.in
Press F12 to open the DevTools
Within the DevTools section, on the Inspector tab, use the Inspector to identify the Search Box WebElement.
Copy the xpath (absolute) and paste it in a text pad.
Construct a logical unique xpath.
Within the DevTools section, on the Console tab (JS sub-menu), paste the logical unique XPath you have constructed in the following format and hit Enter or Return:
$x("logical_unique_xpath_of_search_box")
The WebElement(s) identified by the XPath will be returned.
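The steps above can also be sanity-checked outside the browser. A minimal sketch, using Python's standard library and a simplified stand-in for Google's markup (assumption: the search box is an `<input>` with name="q"; ElementTree only supports a subset of XPath, so the locator is kept to attribute predicates):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the Google home page markup (assumption:
# the real search box is an <input> with name="q").
html = """
<html>
  <body>
    <form action="/search">
      <input name="q" title="Search"/>
      <input name="btnK" type="submit"/>
    </form>
  </body>
</html>
"""

root = ET.fromstring(html)

# A logical, unique XPath for the search box rather than a brittle
# absolute one; ElementTree supports this attribute-predicate subset.
matches = root.findall(".//input[@name='q']")
print(len(matches))  # a unique locator should match exactly one element
```

This mirrors what `$x(...)` does in the DevTools console: evaluate the expression and return the matching elements.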
The new version of Firefox does not support Firebug.
You can use the Chrome DevTools instead if you like.
I personally write XPath expressions using the Chrome DevTools.
For more info, refer to my answer here:
Is there a way to get the xpath in google chrome?
I have reviewed several questions pertaining to this popular topic but have not yet found a solution. I am trying to scrape a dynamic webpage that requires the user to click something and then enter some input. The site I am trying to scrape is here: https://a810-dobnow.nyc.gov/publish/#!/
I am trying to click where it says "Building Identification Number" and then enter some input. I cannot seem to even locate the element I need to click. I used an explicit wait and also checked whether the element sits in some other frame I would need to switch to; as far as I can see, it does not:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("C:\\Users\\#####\\Downloads\\chromedriver_win32\\chromedriver.exe")
driver.get("https://a810-dobnow.nyc.gov/publish/#!/")
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//*[@id='accordiongroup-9-9180-tab']/h4/a/span/h3")))
driver.find_element_by_xpath("//*[@id='accordiongroup-9-9180-tab']/h4/a/span/h3").click()
I just loaded the page, and when I try to search the DOM for the XPath you have provided, it fails to find a matching element.
I'd recommend using something like:
driver.find_element_by_xpath("//h3[contains(text(), 'Building Identification Number (BIN)')]").click()
Hope this helps
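The idea behind the text-based locator can be illustrated offline. A minimal sketch, assuming a hypothetical stand-in for the accordion markup (the text search is done manually because ElementTree's XPath subset lacks `contains()`):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the accordion header markup.
html = """
<div>
  <h4><a><span><h3>Building Identification Number (BIN)</h3></span></a></h4>
  <h4><a><span><h3>Some Other Section</h3></span></a></h4>
</div>
"""

root = ET.fromstring(html)

# Match on the visible text instead of a generated id like
# "accordiongroup-9-9180-tab", which can change between page loads.
target = [h3 for h3 in root.iter("h3")
          if "Building Identification Number (BIN)" in (h3.text or "")]
print(len(target))  # exactly one header carries this text
```

Anchoring on the visible label survives regenerated ids, which is why the `contains(text(), ...)` locator is more robust here.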
I am trying to automate the eBay mobile native app using Selenium/Appium. I am able to retrieve the page elements of all the pages, but not for the Sign In page. I get an error message in UIAutomator, and Appium Inspector 1.6.5 keeps searching with no response. I would like to know whether the issue is with the eBay page, or whether there are alternative ways to find the locators.
Steps to reproduce:
1. Search any object in ebay -> go to details page -> Click on watch
[Screenshot: CaptureImage issue]
This issue occurs if your Appium server is running or the app is open on the mobile device/emulator. Close them, then reopen the app, navigate to the screen you want, and try again.
OR
Stop your server, close uiautomatorviewer, and open uiautomatorviewer again. You will not get the error this time.
Another way to locate an element is to use the code below:
System.out.println("source : "+ driver.getPageSource());
driver.getPageSource() will return the XML of the current screen.
Now you can construct an XPath for your element.
Use the link below to beautify your XML:
http://xmlbeautifier.com/
Use the tool below to validate your XPath:
https://www.freeformatter.com/xpath-tester.html
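The same validation can be scripted instead of using an online tool. A minimal sketch, assuming the page source has been saved; the fragment and resource-ids below are made up for illustration, not eBay's real ids:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of what driver.getPageSource() might return
# for a native Android screen; the resource-ids here are made up.
page_source = """
<hierarchy>
  <android.widget.FrameLayout>
    <android.widget.EditText resource-id="com.ebay.mobile:id/sign_in_username"
                             text="Email or username"/>
    <android.widget.Button resource-id="com.ebay.mobile:id/sign_in_button"
                           text="Sign in"/>
  </android.widget.FrameLayout>
</hierarchy>
"""

root = ET.fromstring(page_source)

# Validate the candidate XPath offline before using it in the test.
xpath = ".//android.widget.Button[@resource-id='com.ebay.mobile:id/sign_in_button']"
matches = root.findall(xpath)
print(len(matches))  # 1 if the locator is unique on this screen
```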
Hope it will help you :)
Summary: When I try to go to the third page of a web site I'm screen scraping using Selenium and Chrome, I can't find any elements on the third page.
I see the HTML for the 2nd page in Driver.PageSource.
Steps:
Navigate.GoToUrl("LoginPage.aspx")
Find username and password elements.
SendKeys to Username and Password.
Find and click on Submit\Login button.
Web Site displays main menu page with two Link style menu items.
Find desired menu item using FindElement(By.LinkText("New Person System")).
Click on link menu item. This should get me to the "Person Search" page (the 3rd page).
Try to wait, using WebDriverWait, for an element on the "Person Search" page. This fails to find the element on the new page after 5-10 seconds.
Instead of using WebDriverWait, I then simply wait 5 or 10 seconds for the page to load using Thread.Sleep(5000). (I realize WebDriverWait is the better design option.)
Try to find link with text "Person".
Selenium fails to find link tag.
I see desired "Person Search" page displayed in Chrome.
I see last page Html in ChromeDriver.PageSource.
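For debugging cases like this, it helps to remember what WebDriverWait actually does: it polls a condition, swallowing lookup failures, until a timeout. A minimal sketch of that pattern in plain Python (the lookup function is a hypothetical stand-in for finding the "Person" link, not the site's real behavior):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This is the pattern WebDriverWait implements: each poll swallows
    lookup failures and retries, raising only when time runs out.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            result = condition()
            if result:
                return result
        except Exception as exc:  # e.g. NoSuchElementException
            last_error = exc
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s: {last_error}")

# Hypothetical stand-in for a lookup that succeeds on the third poll.
attempts = {"n": 0}
def find_person_link():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise LookupError("element not found yet")
    return "Person"

print(wait_until(find_person_link, timeout=5, poll=0.01))  # prints "Person"
```

If polling like this still times out, the element genuinely is not in the DOM the driver sees, which points at a window, frame, or stale-page-handle problem rather than a timing one.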
Chrome geckodriver.exe. Build 7/21/2017.
NUnit 3.7.1
Selenium 3.4
VB
I used IE for another project with similar environment. Didn't have a problem getting to any page (once I coded Selenium correctly).
The web site I'm trying to screen scrape only supports recent IE version. I'm testing a legacy app for another project that requires IE 8. So using IE is out of the question.
Maybe I should try Firefox...
I have rspec tests using Capybara which work great locally and on browserstack with a configuration of OS X Mavericks/Chrome 33 on browserstack.
When I change the configuration to Windows 7 / IE 10 I'm getting an ElementNotVisibleError on the last line of code represented here:
find('#myIdToExpandMyList').click
#click selected one
find(:xpath, "//SomeXPATHToRepresentAValueInMyList", :visible => :all).click
What is happening (as I can see from screenshots) is that the first line of code is not working. For some reason the click on this element has no effect.
Here is an image of the expand (+)
When the user clicks on the plus sign the items in the list appear. Since the click isn't working the items never appear and the last line of code above doesn't work. Why doesn't this find/click work in IE 10 (with Selenium Webdriver)?
Here is the html code behind the expand:
<a id="myIdToExpandMyList" href="javascript:SomeJavscriptCallToExpandWithValues(params)">
<img src="plussign.png" alt="Expand">
</a>
UPDATE: In looking at this further this appears to be related to modal dialogs. In my case I have a modal dialog opening (z-index is set and the rest of the page is not reachable). For some reason (only in IE) I can't click on a link on the modal dialog using a capybara find(element).click. It seems to find the element otherwise I believe I would get an error on that.
Second UPDATE: After trying all sorts of things (falling back to Selenium, different IE versions, native clicks), nothing worked. The only thing that worked was executing the JavaScript via execute_script. The plus sign (href) triggers a JavaScript function which opens the list, so I called it directly. I do not like this solution, so hopefully someone has a better one.
I am replying on behalf of BrowserStack.
I understand for your tests on IE 10, the logs show that the expand(+) button was clicked successfully. However, the click did not initiate the action (expand menu) it was supposed to. Thus, the subsequent actions failed.
As you have mentioned, you are able to run the tests locally on your machine. Could you drop us an email with the following details:
IEDriver version you use locally
Exact version of the IE browser you test on
Selenium Jar version
I tried the following XPath in XPath Helper in Chrome and XPather in Firefox, and it always displays all the snippets (i.e. the descriptions of the search results) on the Google search results page, but it does not work in the Scrapy shell:
//span[@class='st']
In case it matters, I invoke scrapy shell like this:
scrapy shell "http://www.google.com/search?q=myQuery"
and I run hxs.select("//span[@class='st']"). This always returns an empty list.
Any clues as to why this could be happening?
Scrapy is not able to "parse" sites that need JavaScript execution. What the various developer consoles show you is the already-interpreted page, with all JavaScript applied.
Since Google displays its results with the help of JavaScript, Scrapy on its own can't handle this.
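The difference can be illustrated offline. A minimal sketch, using a hypothetical stand-in for the raw HTML as Scrapy would download it (assumption, following the answer above: the `<span class="st">` snippets are inserted client-side, so they are absent from the server response):

```python
import xml.etree.ElementTree as ET

# Hypothetical raw HTML as downloaded, before any JavaScript runs:
# the result snippets (<span class="st">) have not been inserted yet.
raw_html = """
<html>
  <body>
    <div id="search"></div>
    <script>/* results are rendered here client-side */</script>
  </body>
</html>
"""

root = ET.fromstring(raw_html)
print(root.findall(".//span[@class='st']"))  # [] -- nothing to match yet
```

The browser console evaluates the same expression against the post-JavaScript DOM, which is why it matches there but not in the Scrapy shell.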
Sometimes sites will not work with JavaScript disabled (Applebees.com, for example), so you have to use an actual browser like Selenium.
In the Firefox URL bar, type:
about:config
find the preference javascript.enabled and change its value to false
Install FireFinder extension
Open Firebug (F12)
and then enjoy scraping Google with an XPath expression like:
//*[#id="search"]//li[#class="g"]/div[#class="s"]//cite