I am using PHPUnit 3.4.12 to drive my selenium tests. I'd like to be able to get a screenshot taken automatically when a test fails. This should be supported as explained at http://www.phpunit.de/manual/current/en/selenium.html#selenium.seleniumtestcase.examples.WebTest2.php
class WebTest
{
    protected $captureScreenshotOnFailure = true;
    protected $screenshotPath = 'C:\selenium';
    protected $screnshotUrl = 'http://localhost/screenshots';

    public function testLandingPage($selenium)
    {
        $selenium->open("http://www.example.com");
        $selenium->fail("fail");
        ...
    }
}
As you can see, I am forcing the test to fail; in theory, when it does, a screenshot should be taken and put in C:\selenium, since I am running the Selenium RC server on Windows.
However, when I run the test it will just give me the following:
[root@testbox selenium]$ sh run
PHPUnit 3.4.12 by Sebastian Bergmann.
F
Time: 8 seconds, Memory: 5.50Mb
There was 1 failure:
1) WebTest::testLandingPage
fail
/home/root/selenium/WebTest.php:32
FAILURES!
Tests: 1, Assertions: 0, Failures: 1.
I do not see any screenshot in C:\selenium. I can however get a screenshot with $selenium->captureScreenshot("C:/selenium/image.png");
Any ideas or suggestions most welcome.
Thanks
The error handling of this is rather poor on PHPUnit's part; if everything isn't exactly right, it will silently ignore your options without a warning.
As Dave mentioned, if any of the variables are misspelled it will silently not work; you might also try assigning them to the instance in your setUp().
Also, not every condition triggers a screenshot. Try $selenium->assertTextPresent("foobarbaz") instead of your $selenium->fail() as a sanity check.
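For reference, a minimal sketch of what that could look like with the properties assigned in setUp() (the browser and base URL values here are assumptions, not taken from your configuration):

class WebTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        // Assigning these in setUp() instead of as property defaults can help
        // if the property declarations are being silently ignored.
        $this->captureScreenshotOnFailure = true;
        $this->screenshotPath = 'C:\selenium';
        $this->screenshotUrl = 'http://localhost/screenshots';

        $this->setBrowser('*firefox');                    // assumed browser
        $this->setBrowserUrl('http://www.example.com');   // assumed base URL
    }

    public function testLandingPage()
    {
        $this->open('http://www.example.com');
        // A failed assertion, rather than an explicit fail(), should trigger the capture.
        $this->assertTextPresent('foobarbaz');
    }
}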
You may try adding these lines of code:
try {
    $this->assertTrue($this->isTextPresent("You searched for \"Brakes\" (2 matches)"));
} catch (PHPUnit_Framework_AssertionFailedError $e) {
    // Record the failure instead of stopping, then capture the screenshot manually.
    array_push($this->verificationErrors, $e->toString());
    $this->drivers[0]->captureEntirePageScreenshot(
        $this->screenshotPath . DIRECTORY_SEPARATOR . rawurlencode($this->getLocation()) . '.png'
    );
}
I recently had this error because I was following the tutorial.
The first example in the documentation is for PHPUnit_Extensions_Selenium2TestCase. All of the others on the page are for PHPUnit_Extensions_SeleniumTestCase.
Perhaps change
extends PHPUnit_Extensions_Selenium2TestCase
to
extends PHPUnit_Extensions_SeleniumTestCase
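Putting the pieces from the answers together, the corrected declaration for the Selenium RC test case would look roughly like this (note the base class and the $screenshotUrl spelling):

class WebTest extends PHPUnit_Extensions_SeleniumTestCase
{
    // Same properties as in the question, with the URL property spelled correctly.
    protected $captureScreenshotOnFailure = true;
    protected $screenshotPath = 'C:\selenium';
    protected $screenshotUrl = 'http://localhost/screenshots';
}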
I have a problem that I don't exactly know how to solve. I'm implementing an E2E test in which, using Selenium, I need to click a link and check that it sends me to the right URL.
Here is where the problem starts...
There are three possibilities: a mix of the two types of links, just one type, or just the other type. There is no problem when both types of links are present, but when there is only one type, searching for the identifier of the links that are not on the page gives me a TimeoutException. This is not a failure, because it's a possible situation, but I would like to know whether there is a way to check that, if no links are found, the exception is indeed thrown.
I thought of using a runCatching (or try/catch) to wait for the link to appear, and if it doesn't appear, asserting that the timeout exception is thrown again when I look for the element.
This way of doing it smells a bit to me, and I don't know if it's done correctly.
EDIT: I'm using AssertK and JUnit 5 for testing.
EDIT 2: I've done this; I don't know if it's a correct way of doing it:
runCatching {
    driver.waitFor(numberOfWidgetsToBeMoreThan(BrowserSelector.cssSelector(OFFERS_WITH_PRICE_AND_DATE), 0), ofMillis(2000))
}.onFailure {
    assertThrows<WaitTimeoutException> {
        findLink(OFFERS_WITH_PRICE_AND_DATE)
    }
}.onSuccess {
    val widget = findLink(OFFERS_WITH_PRICE_AND_DATE)
    widget.click()
    assertThat(driver.url).contains(NO_DATE_TEXT)
}
I'm not sure I understood your problem correctly, but you can use assertFails to assert that a piece of code throws an exception:
@Test
fun test() {
    val exception = assertFails {
        // some code that should throw
    }
    // some more assertions on the type of exception etc. may go here
}
Because I think I will recommend Codename One for development, I am trying to investigate it more deeply. I just tried out the Test Recorder, which generated a test class.
Now my question: how do I use this test class? Do I have to call the test method from the existing UI code, e.g. via a button that starts it?
Generated code:
public class RegisterUserATest extends AbstractTest {
    public boolean runTest() throws Exception {
        clickButtonByName("Register");
        keyPress(16);
        keyPress(65);
        waitFor(112);
        keyPress(65);
        setText("Name", "A");
        keyPress(16);
        keyPress(65);
        waitFor(113);
        keyPress(16);
        waitFor(1);
        keyPress(97);
        setText("Email", "");
        setText("Password", "A");
        clickButtonByName("Register");
        return true;
    }
}
I think the solution is very easy but I cannot see it.
If this is in NetBeans, right-click the project and select "Test". In IntelliJ IDEA it's under Codename One -> Run Tests.
Notice that the latter has a bug in it that will be fixed in the release coming tomorrow (October 7th 2016).
I am new to Appium. I am running the following test for iOS:
@Test
public void Login() throws InterruptedException {
    Thread.sleep(3000);
    driver.findElement(By.xpath("//window[1]/textfield[9]")).sendKeys("john");
    driver.findElement(By.xpath("//window[1]/secure[1]")).sendKeys("asdf1234");
    driver.findElement(By.name("btn checkbox")).click();
    driver.findElement(By.name("Login")).click();
    Thread.sleep(6000);
}
Here it works fine and it logs in, but when I comment out the line driver.findElement(By.name("btn checkbox")).click(); it does not log in, yet the test shows as passed and there is not a single exception.
Can anybody please tell me what the problem is here?
It seems that your test doesn't check whether it's logged in or not. You're performing the actions to log in, but you're not actually validating anything. You're smoke testing.
What you want to do here:
1. Build something that lets you check for an indicator that the login process has finished (like a welcome label).
2. Use an explicit wait to do this.
3. Define your success criteria: login usually takes 10 seconds, so the success criterion might be anything under 25 seconds.
4. If the element isn't found after 25 seconds, a TimeoutException is thrown; in that case return something like null, otherwise return the element.
Should look something like this:
WebElement welcomeLabel = (new WebDriverWait(driver, 25))
.until(ExpectedConditions.presenceOfElementLocated(By.name("welcomeLabel")));
And then you'll assert something like this:
Assert.assertNotNull(welcomeLabel);
This assertion is what makes this NOT a smoke test.
Of course that's happening. All you do is perform clicks; Appium does exactly that, doesn't encounter any problem, and returns 'test passed'.
You have to write some kind of test yourself to know whether you're logged in or not,
for example by searching for a logout button on the next page.
Example:
Assert.assertTrue(wd.findElement(By.name("Logout")).isDisplayed());
I have TeamCity 7.1 and around 1000 tests. Many tests are unstable and fail randomly. If even a single test fails, the whole build fails, and running a new build takes an hour.
So I would like to be able to configure TeamCity to rerun failed tests within the same build a specific number of times; any success for a test should be considered a success, not a failure. Is this possible?
Also, currently if tests in one module fail, TeamCity does not proceed to the next module. How do I fix that?
With respect, I think you might be approaching this problem from the wrong end. A test that randomly fails is not providing you any value as a metric of deterministic behavior. Either fix the randomness (through use of mocks, etc.) or ignore the tests.
If you absolutely have to, I'd put a loop around some of your test code and catch, say, 5 failures before throwing the exception as a 'genuine' failure. Something like this C# example would do...
public static void TestSomething()
{
    var counter = 0;
    while (true)
    {
        try
        {
            // add test code here...
            return;
        }
        catch (Exception) // catch more specific exception(s)...
        {
            if (counter == 4)
            {
                throw;
            }
            counter++;
        }
    }
}
While I appreciate the problems that can arise with testing async code, I'm with @JohnHoerr on this one: you really need to fix the tests.
The rerun-failed-tests feature is part of the Maven Surefire Plugin. If you execute mvn -Dsurefire.rerunFailingTestsCount=2 test,
then failing tests will be rerun until they pass or the number of reruns has been exhausted.
Of course, -Dsurefire.rerunFailingTestsCount can be used in TeamCity or any other CI server.
See:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html
I am trying to write a custom step that generates steps.
My code looks like this:
/**
 * @Then /^Check_raoul$/
 */
public function checkRaoul()
{
    // grab the content ...
    // get players ...
    $to_return = array();
    foreach ($players as $player) {
        $player = $player->textContent;
        if (preg_match('/^.*video=([^&]*)&.*$/', $player, $matches)) {
            array_push($to_return, new Step\Then('I check the video of id "'.$matches[1].'"'));
        }
    }
    return $to_return;
}

/**
 * @Then /^I check the video of id "([^"]*)"$/
 */
public function iCheckTheVideoOfId($id)
{
    // ...
}
This works fine, but when integrating with Jenkins, or in the CLI, if many executions of iCheckTheVideoOfId fail I see just one error. I would like the number of reported steps to equal the number of iCheckTheVideoOfId calls.
What am I doing wrong?
We abandoned using Jenkins to do BDD checks due to the differences in how test feedback is presented and what Jenkins is capable of. We found that just running our suites locally and then a full check before pushing code to the repo produced better results and helped everyone get better at using the framework.
To answer your question directly, I would suggest configuring your Jenkins job not to fail when a test fails.
This can be accomplished by not outputting results at all: modify your command-line options to not output failures and just log results to an output file. You could then run a script at the end to check for failures.
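A minimal sketch of such a check script, assuming the results go to a plain-text log and that the word "failed" marks a failing step (both the file name and that marker are assumptions about your output format):

<?php
// check_results.php - inspect the logged output after the whole suite has run.
$log = @file_get_contents('behat_results.log');   // assumed output file

if ($log === false) {
    fwrite(STDERR, "Could not read results log\n");
    exit(2);
}

// Count occurrences of the failure marker; adjust to match your formatter's output.
$failures = substr_count(strtolower($log), 'failed');

echo $failures . " failing step(s) found\n";
exit($failures > 0 ? 1 : 0);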