Summary: When I try to go to the third page of a web site I'm screen scraping with Selenium and Chrome, I can't find any elements on the third page.
Driver.PageSource still shows the HTML for the 2nd page.
Steps:
Navigate.GoToUrl("LoginPage.aspx")
Find username and password elements.
SendKeys to Username and Password.
Find and click the Submit/Login button.
Web Site displays main menu page with two Link style menu items.
Find the desired menu item using FindElement(By.LinkText("New Person System")).
Click the link menu item. This should take me to the "Person Search" page (the 3rd page).
Try to wait for an element on the "Person Search" page using WebDriverWait. This fails to find the element on the new page after 5-10 seconds.
Instead of WebDriverWait I then simply wait 5 or 10 seconds for the page to load using Thread.Sleep(5000). (I realize WebDriverWait is the better design option; see the sketch after these steps.)
Try to find a link with the text "Person".
Selenium fails to find the link tag.
I see the desired "Person Search" page displayed in Chrome.
I see the previous page's HTML in ChromeDriver.PageSource.
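For reference, here is a minimal sketch of this flow in Python, using WebDriverWait instead of a fixed sleep. The URL and the username/password/login locators are hypothetical placeholders, and the window-handle switch only applies if the menu link opens a new browser window or tab:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/LoginPage.aspx")           # placeholder URL

driver.find_element(By.ID, "Username").send_keys("user")   # locator is an assumption
driver.find_element(By.ID, "Password").send_keys("pass")   # locator is an assumption
driver.find_element(By.ID, "LoginButton").click()          # locator is an assumption

# Main menu page: click the link-style menu item
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.LINK_TEXT, "New Person System"))
).click()

# If the link opens a new browser window or tab, the driver keeps pointing at the
# old window until we switch handles, which would explain a stale PageSource.
if len(driver.window_handles) > 1:
    driver.switch_to.window(driver.window_handles[-1])

# Wait for something that only exists on the "Person Search" page
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.LINK_TEXT, "Person"))
)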
ChromeDriver (chromedriver.exe), build 7/21/2017.
NUnit 3.7.1
Selenium 3.4
VB
I used IE for another project with a similar environment and didn't have a problem getting to any page (once I coded Selenium correctly).
The web site I'm trying to screen scrape only supports recent IE versions, but I'm testing a legacy app for another project that requires IE 8, so using IE is out of the question.
Maybe I should try Firefox...
Related
On this page:
https://www.bedbathandbeyond.com/store/product/o-o-by-olivia-oliver-turkish-modal-bath-towel-collection/5469128?categoryId=13434
I can see a button with the text "Add to Cart", and I can also see it in dev tools.
But when the same page source is retrieved by headless Chrome using Selenium and my script searches for it, the text is not present.
I also tried the browser's "view page source"; that source did not contain the "Add To Cart" text either.
I then used curl to GET the page; "Add To Cart" wasn't in the returned page source either.
What am I doing wrong?
Is the page hiding the button?
How can I check for its presence, so I can verify product availability?
The elements you are looking for are inside a shadow DOM, so you need to access the shadow root first. It's hard to see exactly what is going on in the DOM without some trial and error, but it's something like this:
WebElement shadowHost = driver.findElement(By.cssSelector("#wmHostPdp"));               // element that hosts the shadow root
SearchContext shadowRoot = shadowHost.getShadowRoot();                                  // requires Selenium 4+
WebElement addToCart = shadowRoot.findElement(By.cssSelector(".shipItBtnCont button")); // search inside the shadow DOM
More info on Shadow DOM & Selenium — https://titusfortner.com/2021/11/22/shadow-dom-selenium.html
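For what it's worth, the same idea in Selenium 4's Python bindings (same selectors, which may need adjusting) looks roughly like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.bedbathandbeyond.com/store/product/o-o-by-olivia-oliver-turkish-modal-bath-towel-collection/5469128?categoryId=13434")

shadow_host = driver.find_element(By.CSS_SELECTOR, "#wmHostPdp")
shadow_root = shadow_host.shadow_root          # ShadowRoot object, available in recent Selenium 4 releases
add_to_cart = shadow_root.find_element(By.CSS_SELECTOR, ".shipItBtnCont button")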
How can I get the page source of the current page?
I call driver.get(link) and I am on the main page. Then I use Selenium to get to another page (by tag and XPath), and when I reach the right page I'd like to obtain its page source.
I tried driver.page_source() but I obtain the page source of the main page, not the current one.
from selenium import webdriver
import time

driver = webdriver.Chrome(ccc)  # ccc is a placeholder for the chromedriver path
driver.get('https://aaa.com')
# click an element on the main page
check1 = driver.find_element_by_xpath('/html/body/div[1]/div/div[2]/button')
check1.click()
time.sleep(1)
# click the link that leads to the page I actually want
check2 = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[2]/div[1]/div/a')
check2.click()
And after check2.click() I am on the page with the new link (this link only works via the click, not by opening it directly). How can I get the page source for this new page?
I need it so I can hand the page over from Selenium to Beautiful Soup.
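In current Selenium bindings, page_source is a property rather than a method, and it returns the DOM of whatever page the driver is on right now. Continuing the snippet above, a rough sketch (assuming the click stays in the same window) might look like:

from bs4 import BeautifulSoup

time.sleep(1)                      # or better, an explicit wait for the new page to load
html = driver.page_source          # property, no parentheses: the current page's DOM
soup = BeautifulSoup(html, 'html.parser')

# If the click opened a new window or tab instead, switch to it first:
# driver.switch_to.window(driver.window_handles[-1])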
I have been using WebDriver and displaying page sources.
Upon typing an XPath into the FirePath text field, if the XPath is correct it displays the corresponding HTML code. It was working fine previously.
But now it's not displaying the corresponding HTML code even though the XPath is correct.
Can anyone help me find a solution to this problem? I even uninstalled FirePath and installed it again, but it's still not working.
If you visit the GitHub Page of FirePath, it clearly mentions:
FirePath is a Firebug extension that adds a development tool to edit, inspect and generate XPath expressions and CSS3 Selectors
Now if you visit the home page of Firebug, it clearly mentions that:
The Firebug extension isn't being developed or maintained any longer. We invite you to use the Firefox DevTools instead, which ship with Firebug.next
So the direction is clear: we have to use the DevTools [F12], which come integrated with Mozilla Firefox 56.x and later releases.
Example usage:
Now, let us assume we have to identify the XPath of the search box on the Google home page.
Open the Mozilla Firefox 56.x browser and browse to the URL https://www.google.co.in
Press F12 to open the DevTools.
Within the DevTools section, on the Inspector tab, use the inspector to identify the search box WebElement.
Copy the (absolute) XPath and paste it into a text pad.
Construct a logical, unique XPath.
Within the DevTools section, on the Console tab, within the JS sub-menu, paste the logical unique XPath you have constructed in the following format and hit Enter or Return:
$x("logical_unique_xpath_of_search_box")
The WebElement identified by the XPath will be returned in the console.
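Once a unique XPath has been verified in the console this way, it can be used in a WebDriver call. A small Python sketch; the XPath //input[@name='q'] for Google's search box is an assumption and may differ:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://www.google.co.in")
# XPath previously verified with $x("//input[@name='q']") in the DevTools console
search_box = driver.find_element(By.XPATH, "//input[@name='q']")
search_box.send_keys("selenium webdriver")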
The new version of Firefox no longer supports Firebug.
You can use the Chrome DevTools instead if you like.
I personally write XPath expressions using the Chrome DevTools.
For more info, refer to my answer here:
Is there a way to get the xpath in google chrome?
Is it possible to use Selenium so that my code and the browser are integrated, i.e. so that I get the updated HTML page every time I make any change on the web page in the browser?
In other words, I would like to run my app, which automatically starts a browser, and every time I make a change on the web page Selenium automatically gets the changed HTML in my Java/Python code. Selecting a dropdown item might be a good example.
Thanks!
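As far as I know, classic WebDriver does not push DOM-change events to client code, so a common workaround is to poll driver.page_source and react when it changes. A rough Python sketch (the URL and poll interval are placeholders):

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")        # placeholder URL

last_html = driver.page_source
while True:                              # stop with Ctrl+C or add your own exit condition
    time.sleep(1)                        # poll interval
    current_html = driver.page_source    # re-read the live DOM
    if current_html != last_html:
        # the user changed something in the browser, e.g. selected a dropdown item
        print("page HTML changed, re-parsing it")
        last_html = current_html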
I have RSpec tests using Capybara which work great locally and on BrowserStack with a configuration of OS X Mavericks / Chrome 33.
When I change the configuration to Windows 7 / IE 10, I get an ElementNotVisibleError on the last line of the code shown here:
find('#myIdToExpandMyList').click
#click selected one
find(:xpath, "//SomeXPATHToRepresentAValueInMyList", :visible => :all).click
What is happening (as I can see from screenshots) is that the first line of code is not working: for some reason the click on this element has no effect.
Here is an image of the expand (+)
When the user clicks the plus sign, the items in the list appear. Since the click isn't working, the items never appear and the last line of code above fails. Why doesn't this find/click work in IE 10 (with Selenium WebDriver)?
Here is the HTML behind the expand:
<a id="myIdToExpandMyList" href="javascript:SomeJavscriptCallToExpandWithValues(params)">
<img src="plussign.png" alt="Expand">
</a>
UPDATE: Looking at this further, it appears to be related to modal dialogs. In my case a modal dialog is open (its z-index is set and the rest of the page is not reachable). For some reason (only in IE) I can't click a link on the modal dialog using a Capybara find(element).click. It does seem to find the element; otherwise I believe I would get an error on that.
Second UPDATE: After trying all sorts of things (falling back to Selenium, different IE versions, native clicks), nothing worked. The only thing that worked was executing the JavaScript via execute_script. The plus sign (href) triggers a JavaScript function which opens the list, so I called it directly. I don't like this solution, so hopefully someone has a better one.
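For illustration, the execute_script fallback described in the second update looks roughly like this. It is shown in Python Selenium for consistency with the other examples here (the original used Capybara's execute_script); the element id and function name come from the HTML snippet above, and params is that snippet's own placeholder:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Ie()                      # IE session, as in the original setup
driver.get("https://example.com/list-page")  # placeholder URL

# Option 1: fire the element's click from JavaScript instead of a native click
expand_link = driver.find_element(By.ID, "myIdToExpandMyList")
driver.execute_script("arguments[0].click();", expand_link)

# Option 2: call the page's own expand function directly (what the update describes)
driver.execute_script("SomeJavscriptCallToExpandWithValues(params);")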
I am replying on behalf of BrowserStack.
I understand that for your tests on IE 10, the logs show the expand (+) button was clicked successfully. However, the click did not initiate the action (expanding the menu) it was supposed to, and thus the subsequent actions failed.
As you have mentioned, you are able to run the tests locally on your machine. Could you drop us an email with the following details:
IEDriver version you use locally
Exact version of the IE browser you test on
Selenium Jar version