Is it possible to skip tests if an element doesn't exist? - selenium

I'm writing a test script for a website. The website has tabs (navigation links).
Let's say the element for that tab is id=email.
If that element doesn't exist, is it possible to skip the whole test? All test cases are based on that tab (id=email).
Right now, I have:
if ($this->isElementPresent("id=email") == true) {
    // perform these steps
}
All the test scripts are like that, so when the element is missing they just open the browser and close it without testing anything, and every test still passes. Is it possible to skip the tests if that element doesn't exist?

Instead of skipping tests, I would configure the test around the same setting that controls whether the fields exist. Mock your configuration and set it to disabled; the tests should then look for the absence of the fields and assert accordingly. Then set the configuration to enabled and test that the field is there, again asserting accordingly.
When the field is set to be disabled, you can also use $this->markTestSkipped(). It is documented in the PHPUnit manual, Chapter 9, "Incomplete and Skipped Tests".
Sample:
public function testEmailIdAbsent()
{
    if ($this->MockConfiguration['Email'] == 'disabled') // Or however your configuration looks
    {
        $this->assertFalse($this->isElementPresent("id=email"), "Email ID is present when disabled.");
        // ...
    }
}

public function testEmailIdPresent()
{
    if ($this->MockConfiguration['Email'] == 'enabled') // Or however your configuration looks
    {
        $this->assertTrue($this->isElementPresent("id=email"), "Email ID is not present when enabled.");
        // ...
    }
}

public function testEmailId()
{
    if ($this->MockConfiguration['Email'] == 'disabled') // Or however your configuration looks
    {
        $this->markTestSkipped('Email configuration is disabled.');
    }
}
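If the goal is really to skip every test that depends on the tab, rather than model it through configuration, you can also guard the tests with a small helper that calls markTestSkipped(). A minimal sketch, assuming your test case keeps the isElementPresent() helper from your snippet and that the page containing the tab has already been opened (the test name below is just a placeholder):
protected function requireEmailTab()
{
    if (!$this->isElementPresent('id=email')) {
        // every test below depends on this tab, so report it as skipped
        // instead of letting the test pass without doing anything
        $this->markTestSkipped('The email tab (id=email) is not present.');
    }
}

public function testSomethingOnEmailTab()
{
    $this->requireEmailTab();
    // perform these steps
}
Skipped tests then show up as 'S' in PHPUnit's output instead of silently passing.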

Related

Is it possible to ignore Firefox for running the feature?

I am trying to use tags to skip a few feature files when running on Firefox.
I mean something like:
#Browser_Chrome
Feature: My Functionality
#ignore #Browser_Firefox
Scenario:
    Given blablabla
Is it possible?
Right now it looks like the #ignore tag is applied regardless of the browser.
You can always check within your tests using getCapabilities() (https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/remote/RemoteWebDriver.html#getCapabilities--):
assumeTrue(!(driver as RemoteWebDriver).capabilities
    .getCapability("browserName").equals("firefox"))
More info about assume: https://junit.org/junit4/javadoc/4.12/org/junit/Assume.html
If you want to run almost all tests on Firefox but exclude a few, I would use a tag with a matching BeforeScenario hook which does the ignoring.
Feature file:
#BeforeExcludeFF #ThisIsMyFeature
Scenario: Don't run this when the browser is Firefox
    Given blablabla
    When blablabla
    Then blablabla
Code behind:
[BeforeScenario("BeforeExcludeFF", Order = 1)]
public void BeforeExcludeFF()
{
    // we need to get the current browser to know if it is Firefox
    ICapabilities capabilities = ((RemoteWebDriver)driver).Capabilities;
    string browser = capabilities.GetCapability("browserName").ToString();
    if (browser.ToLower() == "firefox")
    {
        // don't forget to log WHY we ignore this scenario on FF
        LogToConsoleAndIgnore("We don't want to run this scenario on Firefox because ...");
    }
}

private void LogToConsoleAndIgnore(string reason)
{
    Console.WriteLine(reason);
    Assert.Ignore(reason);
}
Hope this helps.

Selenium Page Object pattern error pages handling

I've got a generic question concerning error pages.
Imagine a simple use case, good (1) and bad (2) authentication.
In case (1), we've got the index page.
In case (2), we've got a specific error page.
The point is, I've got a LoginPage page object, and its submitLoginForm method should return the next page. I click it with a bad login form filled in.
Then we've got 2 options for handling it:
- should we create a LoginErrorPage and give LoginPage a submitNonValidLoginForm method returning this LoginErrorPage?
- should we use LoginPage with submitLoginForm returning the 'right' navigation page IndexPage, and in the JUnit test assert on the driver's real state (it doesn't have the IndexPage elements but some others)?
I hope I'm clear !
Thank you
From my personal experience I can say it tends to be better to have different Page Objects for (conceptually) different pages, even when we're talking about the same URL with different content.
So I suggest following your first option, creating a LoginErrorPage Page Object. Another thing: the page validation should be done in your Page Object, not in the test, because otherwise you're creating a direct dependency between the test and Selenium.
I.e. (in a very pseudocode-ish way):
class BasePage {
    constructor (driver, context, isLoaded = false) {
        this->webDriver = driver
        this->context = context
        // clicking links or submitting forms from other page objects
        // will already trigger the page load at driver level, so in that
        // case we don't want to trigger another page load
        if (!isLoaded) {
            this->loadPage()
        }
        this->validatePage()
    }
    loadPage() {
        this->webDriver->get(this->getPageUrl())
    }
    abstract validatePage()
    abstract getPageUrl()
}

class LoginPage extends BasePage {
    validatePage() {
        this->elementUsername = this->webDriver->findElement(WebDriverBy::id('username'))
        this->elementPassword = this->webDriver->findElement(WebDriverBy::id('password'))
        this->elementSubmit = this->webDriver->findElement(WebDriverBy::id('submit'))
    }
    getPageUrl() {
        return '/login/'
    }
    fillUser(value) {
        this->elementUsername->sendKeys(value)
    }
    fillPassword(value) {
        this->elementPassword->sendKeys(value)
    }
    submitValid() {
        this->elementSubmit->submit()
        return new DashboardPage(this->webDriver, this->context, true)
    }
    submitInvalid() {
        this->elementSubmit->submit()
        return new LoginErrorPage(this->webDriver, this->context, true)
    }
}

class DashboardPage extends BasePage {
    validatePage() {
        this->webDriver->findElement(WebDriverBy::id('welcomeMessage'))
    }
    getPageUrl() {
        return '/dashboard/'
    }
}
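For completeness, the LoginErrorPage returned by submitInvalid() could follow the same pattern, validating the error content instead of the dashboard. A sketch in the same pseudocode style; the 'loginError' id is an assumption about the markup, and getErrorMessages() matches the call used in the test example further below:
class LoginErrorPage extends BasePage {
    validatePage() {
        // hypothetical id; use whatever uniquely identifies the error page
        this->elementError = this->webDriver->findElement(WebDriverBy::id('loginError'))
    }
    getPageUrl() {
        return '/login/'   // same URL as LoginPage, different content
    }
    getErrorMessages() {
        return this->elementError->getText()
    }
}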
At this point your tests only have to sort out the WebDriver fixture; they don't have to know anything about your pages:
testValidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    dashboard = login->submitValid()
testInvalidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    loginError = login->submitInvalid()
testWelcomeMessage:
    dashboard = new DashboardPage(..)
    // a bad (but short enough) example, don't actually do this
    assert(true, regexp('welcome', dashboard->getSource))
Later edit:
From a testing perspective you have to know your expected result. Another approach would be to have a single submit method that accepts the expected page object as a parameter (a sketch of such a method follows below):
testInvalidCredentials:
    login = new LoginPage(..)
    login->fillUser('john')
    login->fillPassword('aa')
    loginError = login->submit('LoginErrorPage')
    assertContains('invalid login', loginError->getErrorMessages())
But after writing 100 tests you'll find this to be too verbose and, if the page received after a successful submit changes, you'll have a lot of rewriting to do.
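A minimal sketch of that single submit method, again in the same pseudocode style (mapping the expected page name to its constructor is just one possible implementation):
// inside LoginPage
submit(expectedPage) {
    this->elementSubmit->submit()
    // build whichever page object the test says it expects;
    // its validatePage() will fail loudly if we landed somewhere else
    return new expectedPage(this->webDriver, this->context, true)
}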

How effectively can I use UI automation recorded test cases for other releases of the application?

I have a web application; I recorded test cases and want to play them back.
In the 1st release of the application, the login module has a username and password, and I recorded 500 test cases for the entire application. Among those 500 test cases, 200 log in with the username and password.
In the 2nd release, the login module has only a username, so I want to reuse the previously recorded test cases with small modifications, rather than going into every test case and changing the password field. This gives me some requirements for the testing framework:
Can I find out which test cases will be affected by changing a field, as in the example above?
Is there a simple way to update them, without going through all the files and changing each one?
I've used different UI automation testing tools, and their record & playback options are very nice, but I could not find the capability I want in any UI automation test framework.
Is there any framework available which does the job for me?
Thanks in advance.
This is a prime example of why you should never record a Selenium test case: whenever you want to update something like the login, you have to change them all.
What you should do is create a test harness/framework for your application.
1. Start by creating a class for each web page, with one function for each element you want to be able to reach.
public By username() {
    return By.cssSelector("input[id$='username']");
}
2. Create helper classes containing sequences you use often. Since username() returns a By locator, the helper looks the element up through the driver:
public void login(String username, String password) {
    driver.findElement(items.username()).sendKeys(username);
    driver.findElement(items.password()).sendKeys(password);
}
3. In your common test setup, add your login function:
@BeforeMethod(alwaysRun = true)
public void setUp() {
    helper.login("user", "password");
}
This gives you the opportunity to create your test cases programmatically. So, for example, if you want to use the same test cases for a different login module where the password element is not present, the login helper could be changed like this:
driver.findElement(items.username()).sendKeys(username);
if (isElementPresent(items.password())) {
    driver.findElement(items.password()).sendKeys(password);
}
The function "isElementPresent" could look like this
public boolean isElementPresent(By locator) {
    try {
        driver.findElement(locator);
        logger.trace("Element " + stripBy(locator) + " found");
    } catch (NoSuchElementException e) {
        logger.trace("Element " + stripBy(locator) + " not found");
        return false;
    }
    return true;
}

What is the difference between setBrowserUrl() and url() in the Selenium 2 WebDriver extension for PHPUnit?

In many examples, I have seen calls made to both webdriver->setBrowserUrl(url) and webdriver->url(url). Why would I want to use one instead of the other? One such example shows using both in the same manner (taken from the PHPUnit manual):
<?php
class WebTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://www.example.com/');
    }

    public function testTitle()
    {
        $this->url('http://www.example.com/');
        $this->assertEquals('Example WWW Page', $this->title());
    }
}
?>
Why would setBrowserUrl() be called once in setup -- and then url() be called with the identical url in the test case itself?
In other examples, I've seen url() called with just a path for the url. What is the proper usage here? I can find almost no documentation on the use of url().
setBrowserUrl() sets a base url, allowing you to use relative paths in your tests.
The example from the phpunit manual is kind of confusing - I believe setBrowserUrl() is being used during setup simply because it'll throw an error without it:
public function start()
{
    if ($this->browserUrl == NULL) {
        throw new PHPUnit_Framework_Exception(
            'setBrowserUrl() needs to be called before start().'
        );
    }
$this->url() will use this base if a relative path is given.
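For example, something along these lines should work; the login.php path and the page title below are made up for illustration:
<?php
class RelativeUrlTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        // base URL that relative paths are resolved against
        $this->setBrowserUrl('http://www.example.com/');
    }

    public function testLoginPageTitle()
    {
        // resolved against the base URL set above
        $this->url('login.php');
        $this->assertEquals('Login', $this->title());
    }
}
?>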

How can I have a domain object's .save() method fail in an integration test?

For an integration test, I want to have a .save() call fail intentionally in order to test the corresponding else-condition.
My class under test does this:
From UserService.groovy:
User user = User.findByXyz(xyz)
if (user) {
    // foo
    if (user.save()) {
        // bar
    } else {
        // I WANT TO GET HERE
    }
}
The approaches I've tried so far have failed.
What I've tried in UserServiceTests.groovy:
def uControl = mockFor(User)
uControl.demand.save { flush -> null } // in order to test a failing user.save()
def enabledUser = userService.enableUser(u.confirmationToken)
uControl.verify()
// or the following:
User.metaClass.'static'.save = { flush -> null } // fails *all* other tests too
How can I get to the else-block from an integration test correctly?
You should almost never have a need for mocking or altering the metaclass in integration tests - only unit tests.
If you want the save() call to fail, just pass in data that doesn't validate. For example, all fields are not-null by default, so saving a bare def user = new User() should fail.
Maybe you could try changing validate() to be something else, using the same metaclass programming that you have shown.
That way, if the validation fails, the save will certainly fail.
What I do in such cases:
I always have at least one field which is not nullable. I simply don't set it and then call .save().
If you want to achieve this on an object that is already in the database, just load it using find or get, set one of the not-null values to null, and then try to save it.
If you don't have Config.groovy configured to throw exceptions on save failures, it will not throw an exception; it simply won't save. You can call .validate() upfront to determine whether it will save or not, and check the object_instance.errors.allErrors list to see the errors.