Alfresco + Selenium WebDriver: Wait for AJAX

I'm developing tests with TestNG and Selenium WebDriver for some custom Alfresco modules our team is developing.
After the page loads, I need to wait for all AJAX requests to finish before I can get the necessary WebElement.
I found an approach: I've figured out that Alfresco uses Dojo, so I wrote the following method (a timeout is yet to be added):
void waitForAJAX() {
    if (javascriptExecutor == null) {
        throw new UnsupportedOperationException(webDriverType.toString() + " does not support JavaScript execution.");
    }
    boolean presentActiveConnections = true;
    Integer numOfCon;
    // Minor optimization: avoids repeated invocation of valueOf() in the while loop.
    Integer zeroInteger = Integer.valueOf(0);
    while (presentActiveConnections) {
        numOfCon = (Integer) javascriptExecutor.executeScript("return selenium.browserbot.getCurrentWindow().dojo.io.XMLHTTPTransport.inFlight.length");
        if (numOfCon.equals(zeroInteger)) {
            presentActiveConnections = false;
        }
    }
}
But when I run my tests I get the following error on invocation of this method from the test:
Failed: ReferenceError: dojo is not defined.
Should the dojo variable be available from any JavaScript? I was also unable to locate it when I manually checked the page source.
Thanks in advance.
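As an aside on the missing timeout: the busy-wait loop can be replaced by a WebDriverWait poll. A minimal sketch in current Selenium 4 style API, assuming the page really does expose a global dojo object (which the error above suggests may not be the case, e.g. if Dojo is loaded as AMD modules). Note also that scripts passed to WebDriver's JavascriptExecutor already run in the page's own window, so the Selenium RC style selenium.browserbot prefix is not needed:

import java.time.Duration;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

void waitForAjax(WebDriver driver) {
    // Poll until no Dojo XHRs are in flight; give up after 30 seconds.
    new WebDriverWait(driver, Duration.ofSeconds(30)).until(d ->
            (Boolean) ((JavascriptExecutor) d).executeScript(
                    "return typeof dojo !== 'undefined'"
                    + " && dojo.io.XMLHTTPTransport.inFlight.length === 0;"));
}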

Related

Does TestCafe support sections in page objects (like Nightwatch does)? If not, how can we achieve this in TestCafe?

We have a rich UI with a lot of web elements and multiple levels of elements segregated into sections. As part of evaluating a new test automation framework, I am looking to see if we can use TestCafe.
Currently we are using the Nightwatch (JavaScript) framework, which supports sections in page objects. Now we are moving to the TestCafe (JavaScript) framework. Could anyone give me an example of how we can maintain sections in page objects using TestCafe? If TestCafe doesn't support them, how do we achieve the same thing?
Nightwatch page object with sections example:
Multiple level of sections in page_objects in nightwatch.js
In TestCafe, a page model is just a JavaScript class (https://devexpress.github.io/testcafe/documentation/guides/concepts/page-model.html#page-model-example), so you can create several nested classes that reflect your page structure.
For example, if the page contains title, toolbar, and list objects, you can use the following code for your page model:
class ListModel {
    constructor (selector) {
        this.selector = selector;
    }
    getItem (i) {
        // find an item
    }
}

class ToolbarModel {
    constructor (selector) {
        this.selector = selector;
    }
    action () {
        // perform an action
    }
}

class PageModel {
    constructor (selector) {
        this.selector = selector;
        this.title = selector.find('#title');
        this.toolBar = new ToolbarModel(selector.find('#toolbar'));
        this.list = new ListModel(selector.find('#list'));
    }
}

Reading parameters from TestNG file

I successfully implemented several tests within the TestNG framework, where parameters are read from an XML file.
Here is the example block that is executed first:
@Parameters({ "country" })
@BeforeSuite(alwaysRun = true)
public void prepareRequest(String country, ITestContext cnt) {
    LoginInfoRequestParm loginParms = new LoginInfoRequestParm(country);
    Headers reqHeaders = new Headers();
    reqHeaders.setHeaders(loginParms);
}
The problem/question is: why does it work only if the ITestContext is specified? Once it is removed, the whole suite is broken and execution never reaches the method prepareRequest(). I was not able to debug it, because I cannot set a breakpoint before the method to see what is going on inside TestNG itself.
Thank you for your explanation.
To get out of this situation, try something like this:

String myPar = context.getCurrentXmlTest().getParameter("country");
if (myPar == null) {
    myPar = "INDIA";
}

Now myPar can be used. The only catch is that if you run the class directly (for debugging or any other purpose), "INDIA" is used; if you run from the testng.xml file, the value is taken from that file.
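Putting it together, a minimal sketch of how the pieces could fit: the "INDIA" fallback comes from the workaround above, while the @Optional annotation (which stops TestNG from failing when the parameter is absent) and the class name are illustrative additions:

import org.testng.ITestContext;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Optional;
import org.testng.annotations.Parameters;

public class SuiteSetup {

    @Parameters({ "country" })
    @BeforeSuite(alwaysRun = true)
    public void prepareRequest(@Optional String country, ITestContext context) {
        // Read the parameter straight from the suite XML; this returns null
        // when the class is run outside of testng.xml (e.g. from the IDE).
        String myPar = context.getCurrentXmlTest().getParameter("country");
        if (myPar == null) {
            myPar = "INDIA"; // illustrative fallback for direct runs
        }
        // ... build the request with myPar as in the original example
    }
}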

Selenium Page Object pattern error pages handling

I've got a generic question concerning error pages.
Imagine a simple use case: good (1) and bad (2) authentication.
In case (1), we've got the index page.
In case (2), we've got a specific error page.
The point is, I've got a page object LoginPage, and submitLoginForm should return the next page. I click submit with a bad login form filled in.
Then we've got two options for handling it:
- should we create a LoginErrorPage and give LoginPage a submitNonValidLoginForm returning this LoginErrorPage?
- should we use LoginPage with submitLoginForm returning the 'right' navigation page IndexPage, and in the JUnit test assert on the driver's real state (it hasn't got IndexPage elements but some others)?
I hope I'm clear!
Thank you
From my personal experience I can say it tends to be better to have different page objects for (conceptually) different pages, even when we're talking about the same URL with different content.
So I suggest following your first option, creating a LoginErrorPage page object. Another thing is that the page validation should be done in your page object, not in the test, because otherwise you're creating a dependency between the test and Selenium directly.
I.e. (in a very pseudocodish way):
class BasePage {
    constructor (driver, context, isLoaded = false) {
        this->webDriver = driver
        // Clicking links or submitting forms from other page objects
        // already triggers the page load at driver level, so in that
        // case we don't want to trigger a page reload here.
        if (!isLoaded) {
            this->loadPage()
        }
        this->validatePage()
    }
    loadPage() {
        this->webDriver->get(this->getPageUrl())
    }
    abstract validatePage()
    abstract getPageUrl()
}

class LoginPage extends BasePage {
    validatePage() {
        this->elementUsername = this->webDriver->findElement(WebDriverBy::id('username'))
        this->elementPassword = this->webDriver->findElement(WebDriverBy::id('password'))
        this->elementSubmit = this->webDriver->findElement(WebDriverBy::id('submit'))
    }
    getPageUrl() {
        return '/login/'
    }
    fillUser(value) {
        this->elementUsername->sendKeys(value)
    }
    fillPassword(value) {
        this->elementPassword->sendKeys(value)
    }
    submitValid() {
        this->elementSubmit->submit()
        return new DashboardPage(this->webDriver, this->context, true)
    }
    submitInvalid() {
        this->elementSubmit->submit()
        return new LoginErrorPage(this->webDriver, this->context, true)
    }
}

class DashboardPage extends BasePage {
    validatePage() {
        this->webDriver->findElement(WebDriverBy::id('welcomeMessage'))
    }
    getPageUrl() {
        return '/dashboard/'
    }
}
At this point your tests only have to sort out the WebDriver fixture, but don't have to know anything about your pages:

testValidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    dashboard = login->submitValid()

testInvalidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    loginError = login->submitInvalid()

testWelcomeMessage:
    dashboard = new DashboardPage(..)
    // a bad (but short enough) example, don't actually do this
    assert(true, regexp('welcome', dashboard->getSource))
L.E.
From a testing perspective you have to know your expected result. Another approach would be to have a single submit that accepts the expected page object as a param:

testInvalidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    loginError = login->submit('LoginErrorPage')
    assertContains('invalid login', loginError->getErrorMessages())

But after writing 100 tests you'll find this to be too verbose, and if the page received after a successful submit changes, you'll have a lot of rewriting to do.
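As a side note, in Java this 'expected page' parameter is usually expressed with a Class token and generics rather than a string, which keeps it type-safe. A minimal sketch under that assumption (all names are illustrative, not from the answer above):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

abstract class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    // Submit the form, then build whichever page object the caller
    // expects to land on (page classes need a public WebDriver ctor).
    protected <T extends BasePage> T submitExpecting(Class<T> expected) {
        driver.findElement(By.id("submit")).submit();
        try {
            return expected.getConstructor(WebDriver.class).newInstance(driver);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not create " + expected, e);
        }
    }
}

A test would then read loginError = loginPage.submitExpecting(LoginErrorPage.class), and the compiler guarantees the returned type.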

Unable to find browser object in RFT 8.5

I installed RFT 8.5 and JRE 7. When I run the scripts, the browser object is not found.
Below is the code which I have used in RFT to find the browser object:
Dim Allobjects() As TestObject
Allobjects = RootTestObject.GetRootTestObject().Find(".class", "Html.HtmlBrowser")
Here it is returning Allobjects.length = 0, and because of that I am getting stuck.
Can anybody help me resolve this issue?
Note: I am using IE8.
I was not able to find the browsers using RootTestObject either. But it is possible to find the browser windows using the Html domains:
startApp("Google");
startApp("asdf");
sleep(5);

DomainTestObject[] dtos = getDomains();
List<DomainTestObject> htmlDomains = new ArrayList<DomainTestObject>();
for (DomainTestObject dto : dtos) {
    if (dto.getName().equals("Html")) {
        htmlDomains.add(dto);
    }
}

List<BrowserTestObject> browsers = new ArrayList<BrowserTestObject>();
for (DomainTestObject htmlDomain : htmlDomains) {
    TestObject[] tos = htmlDomain.getTopObjects();
    for (TestObject to : tos) {
        if (to.getProperty(".class").equals("Html.HtmlBrowser")) {
            browsers.add((BrowserTestObject) to);
        }
    }
}

System.out.println("Found " + browsers.size() + " browsers:");
for (BrowserTestObject browser : browsers) {
    System.out.println(browser.getProperty(".documentName"));
}
Output:
Found 2 browsers:
https://www.google.ch/
http://www.asdf.com/
First, I start 2 browsers. Then I get all Html domain test objects. After that, I get all top objects and check whether their class is Html.HtmlBrowser.
I hope there is a simpler solution—looking forward to seeing one :)
Try the below code snippet:
Dim Allobjects() As TestObject
Allobjects = Find(AtDescendant(".class", "Html.HtmlBrowser"))
Hope it helps.
The browser is a top-level window, so what you can do is:
Dim Allobjects() As TestObject
Allobjects = Find(AtChild(".class", "Html.HtmlBrowser"))
' The above code expects the browser to be statically enabled. RootTestObject is not needed either, as RFT implicitly uses the RootTestObject if no anchor is provided.
Also, if the browser is not statically enabled, you could use the
DynamicEnabler.HookBrowsers() API so that browsers get enabled.

Selenium build list of 404s

Is it possible to have Selenium crawl a TLD and incrementally export a list of any 404's found?
I'm stuck on a Windows machine for a few hours and want to run some tests before getting back to the comfort of *nix...
I don't know Python very well, nor any of its commonly used libraries, but I'd probably do something like this (using C# code for the example, but the concept should apply):
// WARNING! Untested code here. May not completely work, and
// is not guaranteed to even compile.

// Assume "driver" is a validly instantiated WebDriver instance
// (browser used is irrelevant). This API is driver.get in Python,
// I think.
driver.Url = "http://my.top.level.domain/";

// Get all the links on the page and loop through them,
// grabbing the href attribute of each link along the way.
// (Python would be driver.find_elements_by_tag_name)
List<string> linkUrls = new List<string>();
ReadOnlyCollection<IWebElement> links = driver.FindElements(By.TagName("a"));
foreach (IWebElement link in links)
{
    // Nice side effect of getting the href attribute using GetAttribute()
    // is that it returns the full URL, not relative ones.
    linkUrls.Add(link.GetAttribute("href"));
}

// Now that we have all of the link hrefs, we can test to
// see if they're valid.
List<string> validUrls = new List<string>();
List<string> invalidUrls = new List<string>();
foreach (string linkUrl in linkUrls)
{
    HttpWebRequest request = WebRequest.Create(linkUrl) as HttpWebRequest;
    request.Method = "GET";

    // For actual .NET code, you'd probably want to wrap this in a
    // try-catch, and use a null check, in case GetResponse() throws,
    // or returns a type other than HttpWebResponse. For Python, you
    // would use whatever HTTP request library is common.
    // Note also that this is an extremely naive algorithm for determining
    // validity. You could just as easily check for the NotFound (404)
    // status code.
    HttpWebResponse response = request.GetResponse() as HttpWebResponse;
    if (response.StatusCode == HttpStatusCode.OK)
    {
        validUrls.Add(linkUrl);
    }
    else
    {
        invalidUrls.Add(linkUrl);
    }
}

foreach (string invalidUrl in invalidUrls)
{
    // Here is where you'd log out your invalid URLs
}
At this point, you have a list of valid and invalid URLs. You could wrap this all up into a method that you could pass your TLD URL into, and call it recursively with each of the valid URLs. The key bit here is that you're not using Selenium to actually determine the validity of the links. And you wouldn't want to "click" on the links to navigate to the next page, if you're truly doing a recursive crawl. Rather, you'd want to navigate directly to the links found on the page.
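For completeness, here is a rough Java equivalent of that recursive wrapper (Java to match the rest of this thread; the class name and the site-root handling are illustrative, not a drop-in solution):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class NotFoundCrawler {
    private final String siteRoot;                 // e.g. "http://my.top.level.domain/"
    private final Set<String> visited = new HashSet<>();
    private final List<String> notFound = new ArrayList<>();

    public NotFoundCrawler(String siteRoot) {
        this.siteRoot = siteRoot;
    }

    public List<String> crawl(WebDriver driver, String url) {
        if (visited.add(url)) {
            driver.get(url);
            // Collect hrefs first: navigating away during recursion would
            // leave the WebElements from this page stale.
            List<String> hrefs = new ArrayList<>();
            for (WebElement link : driver.findElements(By.tagName("a"))) {
                String href = link.getAttribute("href");
                if (href != null && href.startsWith("http")) {
                    hrefs.add(href);
                }
            }
            for (String href : hrefs) {
                if (statusOf(href) == 404) {
                    notFound.add(href);            // incrementally log/export here
                } else if (href.startsWith(siteRoot)) {
                    crawl(driver, href);           // recurse only within the site
                }
            }
        }
        return notFound;
    }

    private int statusOf(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");         // cheaper than GET for a status check
            return conn.getResponseCode();
        } catch (Exception e) {
            return -1;                             // unreachable, but not a 404
        }
    }
}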
There are other approaches you might take, like running everything through a proxy, and capturing the response codes that way. It depends a little on how you expect to structure your solution.
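For the proxy variant, one commonly used option on the Java side is BrowserMob Proxy, which records all HTTP traffic the browser generates into a HAR that can then be scanned for 404s. A hedged sketch, assuming the net.lightbody.bmp artifact and its Selenium integration:

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.HarEntry;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class ProxyExample {
    public static void main(String[] args) {
        // Start the embedded proxy and point the browser at it.
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

        FirefoxOptions options = new FirefoxOptions();
        options.setProxy(seleniumProxy);
        WebDriver driver = new FirefoxDriver(options);

        proxy.newHar("crawl");
        driver.get("http://my.top.level.domain/");

        // Every response the browser saw, including sub-resources,
        // is in the HAR; pull out anything that came back 404.
        for (HarEntry entry : proxy.getHar().getLog().getEntries()) {
            if (entry.getResponse().getStatus() == 404) {
                System.out.println("404: " + entry.getRequest().getUrl());
            }
        }

        driver.quit();
        proxy.stop();
    }
}

The upside of this approach over checking links one by one is that it also catches broken images, scripts, and XHRs, not just anchor targets.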