How to get the full source of a link using Selenium

I'm using Selenium RC and want to get the link element with all its attributes. Something like:
link = sel.get_full_link('//a[@id="specific-link"]')
and the result of:
print link
would be:
<a id="specific-link" name="links-name" href="url"> text </a>
Is this possible?
thanks

Here's a fancier solution:
sel.get_eval("window.document.getElementById('ID').innerHTML")
(don't be picky with me on the javascript..)

I think the best way to do this would be to use the getHtmlSource command to get the entire HTML source, and then use either a regular expression or HTML parser to extract the element of interest.
The following Java example will output all links to System.out:
selenium.open("http://www.example.com/");
String htmlSource = selenium.getHtmlSource();
Pattern linkElementPattern = Pattern.compile("<a\\b[^>]*href=\"[^>]*>(.*?)</a>");
Matcher linkElementMatcher = linkElementPattern.matcher(htmlSource);
while (linkElementMatcher.find()) {
    System.out.println(linkElementMatcher.group());
}
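If you prefer the HTML-parser route mentioned above, a hedged sketch with jsoup (an assumption on my part; any HTML parser would do, and the selenium instance is the one from the snippet above) could look like this:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

String htmlSource = selenium.getHtmlSource();
Document doc = Jsoup.parse(htmlSource);
// select the link by its id and print the full tag, attributes included
Element link = doc.select("a#specific-link").first();
if (link != null) {
    System.out.println(link.outerHtml());
}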

getAttribute
String href = selenium.getAttribute("xpath=//a[@id='specific-link']/@href");
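If you need the full tag rather than a single attribute, a hedged sketch combining getEval with the DOM's outerHTML property (assumptions: the Java RC client, a browser under test that supports outerHTML, and a link carrying that id) could look like this:
// evaluate JavaScript in the page and read the element's outerHTML, which
// includes the tag itself together with all of its attributes
String fullLink = selenium.getEval(
        "window.document.getElementById('specific-link').outerHTML");
System.out.println(fullLink);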

I've been trying to do just this, and came up with the following:-
var selenium = Selenium;
string linkText = selenium.GetText("//a[@href='/admin/design-management']");
Assert.AreEqual("Design Management", linkText);

Use the code below to get all the links on the page:
$str3 = "window.document.getElementsByTagName('a')";
$k = $this->selenium->getEval($str3);
$url = explode(",", $k);
$array_size = count($url);
$name = array();
$l = 0;
for ($i = 0; $i < $array_size; $i++) {
    if (!strstr($url[$i], 'javascript')) {
        $name[$l] = $url[$i];
        echo "\n" . $name[$l];
        $l++;
    }
}
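For comparison, a rough sketch of the same idea with the Java WebDriver API (assumptions: an already-initialised WebDriver instance named driver; javascript: pseudo-links are skipped the same way the PHP loop does):
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// collect every anchor on the page and print its href, skipping javascript: links
List<WebElement> anchors = driver.findElements(By.tagName("a"));
for (WebElement anchor : anchors) {
    String href = anchor.getAttribute("href");
    if (href != null && !href.startsWith("javascript")) {
        System.out.println(href);
    }
}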

If the link isn't dynamic, then try this rather cheesy, hacky solution (This is in Python):
selenium.click("//a[text()='Link Text']")
selenium.wait_for_page_to_load(30000)
myurl = selenium.get_location()
Cheesy but it works.
Note: this will not work if the link redirects.

Related

Need Help in programming using selenium

My problem is that I cannot get the value of href from this element on the page:
<a target="_blank" href="http://xxx.xx/RLS?mid=-1050286007&guid=53v90152oyA8bDg&lid=26527875" clinkid="26527875"></a>
How can I get the value of href using findElement?
You should try using getAttribute after finding the element, as below:
String href = driver.findElement(By.cssSelector("a[clinkid = '26527875']")).getAttribute("href");
Edit 1: if clinkid is dynamically generated, try locating by visible link text instead, as below:
String href = driver.findElement(By.linkText("your link text")).getAttribute("href");
or
String href = driver.findElement(By.partialLinkText("your link text")).getAttribute("href");
or
String href = driver.findElement(By.cssSelector("a[target = '_blank']")).getAttribute("href");
Edit 2: if you want the query string from the URL, you can use java.net.URL to parse it and then call getQuery() to get the query string, as below:
URL url = new URL(href);
String queryStr = url.getQuery();
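For example, with the href from the question above (a quick sketch; splitting on & and = is enough for a simple query string like this one):
import java.net.URL;

// new URL(...) throws the checked MalformedURLException, so declare or catch it
URL url = new URL("http://xxx.xx/RLS?mid=-1050286007&guid=53v90152oyA8bDg&lid=26527875");
String queryStr = url.getQuery(); // mid=-1050286007&guid=53v90152oyA8bDg&lid=26527875
// split the query string into its individual parameters
for (String pair : queryStr.split("&")) {
    String[] kv = pair.split("=", 2);
    System.out.println(kv[0] + " = " + (kv.length > 1 ? kv[1] : ""));
}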
Hope it helps..:)

PHPUnit and WebDriver - How to get the current URL?

Scenario:
I open www.google.com, input some keywords and click the search button.
Now I get to the result page. I want to get the current URL of this result page, including the query parameters.
I found a method getBrowserUrl() here: phpunit-selenium on GitHub, line 410.
But this method returned the value that I set in the setUp function:
public function setUp() {
    $this->setBrowser(testConfig::$browserName);
    $this->setBrowserUrl('http://www.google.com/');
}

public function testGoogleSearch() {
    $this->url('');
    // input some keywords
    .......
    // click the search button
    .......
    // want to get the url of the result page
    $resultUrl = $this->getBrowserUrl();
    echo $resultUrl;
}
I got the string 'http://www.google.com/' instead of the full URL of the result page.
Please help me, thanks!
The answer is:
$currentURL = $this->url();
I also asked this question here
Thanks to @jaruzafa
From the source code, I'd say it's getCurrentURL()
https://github.com/facebook/php-webdriver/blob/787e71db74e42cdf13a41d500f75ea43da84bc75/lib/WebDriver.php#L43
You can also use
$url=$this->getLocation();
This is how I get the current URL:
$resultUrl = $this->getSession()->getDriver()->getCurrentUrl();
echo $resultUrl;
or
$resultUrl = $this->getSession()->getCurrentUrl();
echo $resultUrl;
Both are from Behat - Mink
If you are using an older version of PHPUnit Selenium this might be helpful:
$url = $this->getEval('window.location.href;');
$this->assertEquals('EXPECTEDURL', $url);
This is what worked for me
$urlAry = $driver->executeScript('return window.location',array());
$currentURL = $urlAry['href'];

Selenium Xpath Not Matching Items

I am trying to use Selenium's XPath support to find a set of elements. I have used FirePath on Firefox to create and test the XPath I came up with, and it works just fine, but when I use the same XPath in my C# test with Selenium, nothing is returned.
var MiElements = this._driver.FindElements(By.XPath("//div[@class='context-menu-item' and descendant::div[text()='Action Selected Jobs']]"));
and the HTML looks like this:
Can anyone please point me in the right direction? Everything I have read on the web tells me that this XPath is correct.
Thanking you all in advance.
Please post the actual HTML so we can simply drop it into an HTML file and try it ourselves, but I noticed that there is a trailing space at the end of the class name:
<div title="Actions Selected Jobs." class="context-menu-item " .....
So force XPath to strip the trailing spaces first:
var MiElements = this._driver.FindElements(By.XPath("//div[normalize-space(@class)='context-menu-item' and descendant::div[text()='Action Selected Jobs']]"));
Perhaps you don't take into consideration the time that the elements need to load, and you look for them before they are "searchable". UPDATE: I skipped examples regarding this issue; see Slanec's comment.
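For example, a minimal sketch of an explicit wait with the Selenium 2/3 Java bindings (assumptions: a WebDriver instance named driver and a 10-second timeout):
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// wait up to 10 seconds for the menu items to be present before querying them
WebDriverWait wait = new WebDriverWait(driver, 10);
List<WebElement> items = wait.until(ExpectedConditions.presenceOfAllElementsLocatedBy(
        By.xpath("//div[normalize-space(@class)='context-menu-item']")));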
Anyway, Selenium recommends avoiding XPath searches whenever possible, because they are slower and more "fragile".
You could find your element like this:
//see the method code below
WebElement div = findDivByTitle("Action Selected Jobs");
//example of searching for one (first found) element
if (div != null) {
WebElement myElement = div.findElement(By.className("context-menu-item"));
}
......
//example of searching for all the elements
if (div != null) {
WebElement myElement = div.findElements(By.className("context-menu-item-inner"));
}
//try to wrap the code above in convenient method/s with expressive names
//and separate it from test code
......
WebElement findDivByTitle(final String divTitle) {
List<WebElement> foundDivs = this._driver.findElements(By.tagName("div"));
for (WebElement div : foundDivs) {
if (element.getAttribute("title").equals(divTitle)) {
return element;
}
}
return null;
}
This is approximate code (based on your explanation); you should adapt it to your purposes. Again, remember to take the load time into account and to separate your utility code from the test code.
Hope it helps.

Alternative to innerHTML for IE

I create HTML documents that include proprietary tags for links that get transformed into standard HTML when they go through our publishing system. For example:
<LinkTag contents="Link Text" answer_id="ID" title="Tooltip"></LinkTag>
When I'm authoring and reviewing these documents, I need to be able to test these links in a browser, before they get published. I wrote the following JavaScript to read the attributes and write them into an <a> tag:
var LinkCount = document.getElementsByTagName('LinkTag').length;
for (i = 0; i < LinkCount; i++) {
    var LinkText = document.getElementsByTagName('LinkTag')[i].getAttribute('contents');
    var articleID = document.getElementsByTagName('LinkTag')[i].getAttribute('answer_id');
    var articleTitle = document.getElementsByTagName('LinkTag')[i].getAttribute('title');
    // the <a ...> markup was stripped when the post was rendered; the href format here is only illustrative
    document.getElementsByTagName('LinkTag')[i].innerHTML =
        '<a href="' + articleID + '" title="' + articleTitle + '">' + LinkText + '</a>';
}
This works great in Firefox, but not in IE. I've read about the innerHTML issue with IE and imagine that's the problem here, but I haven't been able to figure out a way around it. I thought perhaps jQuery might be the way to go, but I'm not that well versed in it.
This would be a huge productivity boost if I could get this working in IE. Any ideas would be greatly appreciated.
innerHTML only affects what is INSIDE the element's open/close tags. So, for instance, if your LinkTag[i] were an <a> element, setting innerHTML = "<a ...> </a>" would put that markup literally between the element's opening and closing tags.
You would need to put that in a DIV. Perhaps use your code to read the data from the links, then place the corresponding HTML into a div.
var LinkCount = document.getElementsByTagName('LinkTag').length;
for (i = 0; i < LinkCount; i++) {
    var LinkText = document.getElementsByTagName('LinkTag')[i].getAttribute('contents');
    var articleID = document.getElementsByTagName('LinkTag')[i].getAttribute('answer_id');
    var articleTitle = document.getElementsByTagName('LinkTag')[i].getAttribute('title');
    // again, the <a ...> markup was stripped in the original post; the href format is illustrative
    document.getElementById('MyDisplayDiv').innerHTML =
        '<a href="' + articleID + '" title="' + articleTitle + '">' + LinkText + '</a>';
}
This should produce your HTML results within a div. You could also simply append the other LinkTag elements to a single DIV to produce a sort of "Preview of Website" within the div.
document.getElementById('MyDisplayDiv').innerHTML += '<a href="' + articleID + '" title="' + articleTitle + '">' + LinkText + '</a>';
Note the +=. This should append your generated HTML to the DIV with ID "MyDisplayDiv". Your list of LinkTag elements would be converted into a list of HTML elements within the div.
DOM functions might be considered more "clean" (and faster) than simply replacing the innerHTML in cases like this. Something like this may work better for you:
// where el is the element you want to wrap in an <a> link
var newel = document.createElement('a');
el.parentNode.insertBefore(newel, el);
el = el.parentNode.removeChild(el);
newel.appendChild(el);
newel.setAttribute('href', 'urlhere');
Similar code worked fine for me in Firebug; it should wrap the element with <a> ... </a>.
I wrote a script, which you can find on my blog here: http://blog.funelr.com/?p=61. Take a look at it; it automatically fixes IE innerHTML errors in IE8 and IE9 by hijacking IE's innerHTML property, which is "read-only" for table elements, and replacing it with my own.
I have this xmlObject:
<c:value i:type="b:AccessRights">ReadAccess WriteAccess AppendAccess AppendToAccess CreateAccess DeleteAccess ShareAccess AssignAccess</c:value>
I find the value using this: responseXML.getElementsByTagNameNS("http://schemas.datacontract.org/2004/07/System.Collections.Generic",'value')[0].firstChild.data
This one is the best: elm.insertAdjacentHTML('beforeend', str);
If your element's content is very small: elm.innerHTML += str;

How to verify links

How do I verify whether particular links are present or not?
e.g.
I have 10 links on a page, and I want to verify one particular link.
Is it possible?
I am using Selenium with Java.
Can I write something like this inside the Selenium code?
e.g.
selenium.click("searchimage-size");
selenium.waitForPopUp("dataitem", "3000");
selenium.selectWindow("name=dataitem");
What you can do is find all links on the page like this:
var anchorTags = driver.FindElements(By.TagName("a"));
and then iterate through the anchorTags collection to make sure you've got what you're looking for.
Or if you have a list of the link texts you can do something like this:
foreach(var link in getMyLinkTextsToTest())
{
    var elementToTest = driver.FindElement(By.LinkText(link));
    Assert.IsNotNull(elementToTest);
}
This code is all untested and right off the top of my head, so you might need to make some slight modifications, but it should be close to usable.
If you are using Selenium 1.x you can use this code:
String xpath = "//<xpath till your anchor tag>a/@href";
String href = selenium.getAttribute(xpath);
String expectedLink = "your link";
assertEquals(href,expectedLink);
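Alternatively, if you only need to check that the link is present (rather than inspect its href), a small Selenium 1.x sketch using isElementPresent could work; the link text below is a placeholder:
// verifies that a link with the given visible text exists on the page
assertTrue(selenium.isElementPresent("link=your link text"));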
I hope this may help you...
List<WebElement> links = driver.findElements(By.tagName("a"));
for (WebElement we : links) {
    if ("Specific link text".equals(we.getText())) {
        we.click();
    }
}
I'm collecting all the links into the List variable 'links' and iterating over it, checking whether the specific link text we are looking for is present. If it is found, that link is clicked.
If you're looking to verify the content of a specific href, you can use JavaScript to return the outerHTML for a specific WebElement, which you can identify however you like; in the example below I use By.cssSelector:
WebElement element = driver.findElement(By.cssSelector("..."));
String sourceContents = (String)((JavascriptExecutor)driver).executeScript("return arguments[0].outerHTML;", element);
assertEquals(sourceContents, "Learn More");
If you want to make it a tad more elegant, you can shave the undesired parts off of the string, but this is the general case as of Selenium-java 2.53.1 / Selenium-api 2.47.1, as far as I can observe.
The best approach would be to use the getText() method:
List<WebElement> allLinks = driver.findElements(By.tagName("a"));
for (WebElement specificlink : allLinks) {
    if (specificlink.getText().equals("link Text")) {
        System.out.println("Link found");
        break;
    }
}