Selenium with Python: can't find element by link text

Could you help me understand why, in this particular case, find_element_by_partial_link_text doesn't catch the element?
from selenium import webdriver
import unittest

class RegisterNewUser(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.driver.get("http://web.archive.org/web/20141117213704/http://demo.magentocommerce.com/")

    def test_register_new_user(self):
        self.driver.find_element_by_link_text("Log In").click()
Pardon the strange link. I'm reading a book on Selenium and the link originally came from there, but the page's contents have since changed. The book still seems fine to me, so I just extracted the old web page from an archive.
Well, if I view the page source, I can find the link there. But I can't reach it via Selenium.
Could you give me a hint? Thank you in advance.

The link is hidden; you will need to click on the menu (Account) first.
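For example, in the test above something along these lines should work (a minimal sketch; the exact "ACCOUNT" link text is an assumption based on the demo store's header menu):

def test_register_new_user(self):
    # Open the "Account" menu first so the hidden "Log In" link
    # becomes visible and clickable (link text is an assumption).
    self.driver.find_element_by_link_text("ACCOUNT").click()
    self.driver.find_element_by_link_text("Log In").click()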


How can I get the link to YouTube Channel from Video Page?

I've been trying to get the link to a YouTube channel from the video page, but I couldn't find the element for the link. With the Inspector, it is obvious that the link is right there in the DOM.
Using the selector 'a.yt-simple-endpoint.style-scope.yt-formatted-string', I tried to get the link with the following code.
! pip install selenium
! pip install beautifulsoup4
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome('D:\chromedrive\chromedriver.exe')
driver.get('https://www.youtube.com/watch?v=P6Cc2R2jK6s')
soup = BeautifulSoup(driver.page_source, 'lxml')
links = soup.select('a.yt-simple-endpoint.style-scope.yt-formatted-string')
for link in links:
    print(link.get_attribute("href"))
However, no matter whether I used links = soup.select('a.yt-simple-endpoint.style-scope.yt-formatted-string') or links = soup.find('a', class_='yt-simple-endpoint style-scope ytd-video-owner-renderer'), it did not print anything. Can someone please help me solve this?
Instead of this:
links = soup.select('a.yt-simple-endpoint.style-scope.yt-formatted-string')
in Selenium I would do:
links = driver.find_elements_by_css_selector('a.yt-simple-endpoint.style-scope.yt-formatted-string')
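Put together, a minimal sketch of the Selenium-only approach could look like this. Note that get_attribute is a Selenium method; a BeautifulSoup tag would need link.get('href') instead. Also, the page is rendered dynamically, so the elements may need a moment to load:

from selenium import webdriver

driver = webdriver.Chrome('D:\chromedrive\chromedriver.exe')
driver.get('https://www.youtube.com/watch?v=P6Cc2R2jK6s')
driver.implicitly_wait(10)  # dynamic page: give the elements time to appear

# Query the live DOM instead of a static page_source snapshot.
links = driver.find_elements_by_css_selector('a.yt-simple-endpoint.style-scope.yt-formatted-string')
for link in links:
    print(link.get_attribute('href'))  # Selenium elements do support get_attribute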

retrieving ad urls using scrapy and selenium

I am trying to retrieve the ad URLs for this website:
http://www.appledaily.com
The ad URLs are loaded using JavaScript, so a standard CrawlSpider does not work. The ads also change as you refresh the page.
I found this question here, and what I gathered is that we need to first use Selenium to load the page in the browser and then use Scrapy to retrieve the URL. I have some experience with Scrapy but none at all with Selenium. Can anyone show me, or point me to a resource on, how I can write a script to do that?
Thank you very much!
EDIT:
I tried the following, but neither works in opening the ad banner. Can anyone help?
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://appledaily.com')
adBannerElement = driver.find_element_by_id('adHeaderTop')
adBannerElement.click()
2nd try:
adBannerElement =driver.find_element_by_css_selector("div[#id='adHeaderTop']")
adBannerElement.click()
A CSS selector should not contain the # symbol inside an attribute filter; it should be div[id='adHeaderTop'], or, a shorter way of writing the same thing, div#adHeaderTop.
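Applied to the second attempt above, the corrected line would be:

adBannerElement = driver.find_element_by_css_selector("div#adHeaderTop")
adBannerElement.click()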
Actually, on observing and analyzing the site and the event you are trying to carry out, I find that the noscript tag is what should interest you. Just get the HTML source of that node, parse out the href attribute, and fire that URL.
It will be equivalent to clicking the banner.
<noscript>
"<a href="http://adclick.g.doubleclick.net/aclk%253Fsa%...</a>"
</noscript>
(This is not the complete node information; just inspect the banner in Chrome and you will find this tag.)
EDIT: Here is a working snippet that gives you the URL without clicking on the ad banner, taken from the tag mentioned above.
WebDriver driver = new FirefoxDriver();
driver.navigate().to("http://www.appledaily.com");
WebElement objHidden = driver.findElement(By.cssSelector("div#adHeaderTop_ad_container noscript"));
if (objHidden != null) {
    String innerHTML = objHidden.getAttribute("innerHTML");
    String adURL = innerHTML.split("\"")[1];
    System.out.println("** " + adURL); // URL when you click on the ad
} else {
    System.out.println("<noscript> element not found...");
}
Though this is written in Java, the page source won't change.
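Since the question is Python-based, here is a rough Python equivalent of the snippet above (an untested sketch; the container id comes from this answer and may have changed on the live site):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://www.appledaily.com')

# The <noscript> fallback inside the ad container holds a plain <a href=...>.
noscript = driver.find_element_by_css_selector('div#adHeaderTop_ad_container noscript')
inner_html = noscript.get_attribute('innerHTML')
ad_url = inner_html.split('"')[1]  # the href value is the first quoted string
print('**', ad_url)  # the URL you would land on by clicking the ad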

How to click on each marker of google map present on any website using selenium webdriver

How do you do automation testing of a Google map? I have a map in my project/application, and now I want to click on each of the markers.
Since you have no experience with WebDriver, I'm going to give you the answer you want (and not the one you need, which is "Go look at the WebDriver manual and tutorials.").
Java example:
// opens up Chrome, but you can use any other browser
WebDriver driver = new ChromeDriver();
// goes to GMaps page and searches for "Washington"
driver.get("https://maps.google.com/maps?q=Washington");
// clicks the only marker on the page
driver.findElement(By.cssSelector("img[src*='markerTransparent.png']")).click();
// don't forget to kill the browser or else you'll have neverending chromedriver.exe processes
driver.quit();
Now, you need to take a step back, look at WebDriver, choose a language in which you want to write your tests, go through the API and some examples, then try to implement your tests and if something goes astray, feel free to post another question with a particular issue (just make sure to search for it first).
You can click on each marker by locating that marker using its ID.
Here is a script I wrote to click on a marker in Google Maps:
d = Selenium::WebDriver.for :firefox
d.get 'http://maps.google.com'
d.find_element(:id, 'gbqfq').click
d.find_element(:id, 'gbqfq').send_keys 'hdfc bank pune'
d.find_element(:id, 'gbqfb').click
d.find_element(:id, 'mtgt_J.1000').click
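If you prefer Python, the same flow as the Java example above translates directly (a sketch; locators such as the marker image filename depend on the current Maps markup and may need updating):

from selenium import webdriver

# Opens Chrome, but any other browser works too.
driver = webdriver.Chrome()
# Go to the Maps page and search for "Washington".
driver.get('https://maps.google.com/maps?q=Washington')
# Click the only marker on the page (filename carried over from the Java example).
driver.find_element_by_css_selector("img[src*='markerTransparent.png']").click()
# Kill the browser to avoid orphaned chromedriver processes.
driver.quit()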

Issue getting context menu in adobe CQ5 with selenium webdriver

I am trying to automate Adobe CQ5 with Selenium WebDriver.
I am finding it difficult to right-click on content pages in the right-hand pane.
If anyone has succeeded in working with the context menu in the right-hand pane/content pages, please guide me with the approach that worked.
Let me give you more details of the issue I am facing:
I have an article named 'MyArticle' and I am trying to right-click and open it. When I use the piece of code below, the context menu itself does not appear, so I cannot work with it.
Actions action = new Actions(myD);
WebElement wb =myD.findElement(By.xpath("//table/tbody/tr/td/div[contains(text(),'MyArticle')]"));
Action rightClick = action.contextClick(wb).sendKeys(Keys.ARROW_DOWN).sendKeys(Keys.RETURN).build();
rightClick.perform();
I tried different ways, but I get an error while locating the element; any clue would be really helpful.
Thanks,
Pankaj
I am not sure I understand your question. I am assuming you are in WCM, trying to open a page for editing.
The code below works for me in CQ 5.6, Selenium 2.25 in both IE and FF.
WebElement tableRow = driver.findElement(By.id("cq-siteadmin-grid"))
        .findElement(By.xpath(".//div[text()='YOUR_PAGE_TITLE_HERE']"));
new Actions(driver).contextClick(tableRow).perform();
WebElement menu = driver.findElement(By.xpath("//div[contains(@class, 'x-menu') and contains(@style, 'visible')]"));
menu.findElement(By.xpath(".//span[text()='Open']")).click();
Let me know if it helped.

Can beautiful soup also hit webpage events?

Beautiful Soup is a Python library for pulling data out of HTML and XML files. I want to use it to extract webpage data, but I couldn't find any way to click the buttons and anchor links that, in my case, drive the page navigation. So do I have to use something else, or does Beautiful Soup have a capability I'm not aware of?
Please advise me!
To answer your tags/comment: yes, you can use them together (Selenium and BeautifulSoup), and no, you can't use BeautifulSoup directly to execute events (clicking, etc.). Although I haven't ever used the two together in the same situation myself, a hypothetical workflow could involve using Selenium to navigate to a target page via a certain path (i.e. click() these options and then click() the button to the next page), and then using BeautifulSoup to read driver.page_source (where driver is the Selenium driver you created to 'drive' the browser). Since driver.page_source is the HTML of the page, you can use BeautifulSoup as usual, parsing out whatever information you need.
Simple example:
from bs4 import BeautifulSoup
from selenium import webdriver

# Create your driver
driver = webdriver.Firefox()
# Get a page
driver.get('http://news.ycombinator.com')
# Feed the source to BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.title)  # <title>Hacker News</title>
The main idea is that anytime you need to read the source of a page, you can pass driver.page_source to BeautifulSoup in order to read whatever you want.
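As a slightly fuller sketch of that click-then-parse flow (the 'More' link text is real on Hacker News, but the story-link selector is an assumption that may need adjusting for the current markup):

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://news.ycombinator.com')

# Use Selenium for the event: click through to the second page.
driver.find_element_by_link_text('More').click()

# Hand the resulting source to BeautifulSoup for parsing.
soup = BeautifulSoup(driver.page_source, 'html.parser')
for link in soup.select('a.storylink'):  # selector is an assumption
    print(link.get_text())

driver.quit()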