iOS UI testing issue for logout scenario - XCTest

I am facing an issue while writing UI test cases for iOS using the XCTest framework.
Consider a 3-step signup process and 3 test cases, one for each step. Each test case requires the user to be logged out before it runs. So I have written the code as:
func test1() {
    // some code here for step 1
}
func test2() {
    test1()
    // some code here for step 2
}
func test3() {
    test2()
    // some code here for step 3
}
This works fine when running the test cases individually. But when running them collectively, the first test runs successfully and the second test fails, because it requires the registered user to be logged out before running the test.
To solve this, I rewrote the code as follows:
func test1() {
    // some code here for step 1
    // logout code
}
func test2() {
    test1()
    // some code here for step 2
    // logout code
}
func test3() {
    test2()
    // some code here for step 3
    // logout code
}
Now the issue is that while running the second test case, it calls the first function, which logs out the user, so I cannot reuse the code.
Is there a better way to do this?

Yes, sure. Don't call test functions from other test functions; instead, create separate functions for the actions, like this:
private func action1() {
    // some code here for step 1
}
private func action2() {
    action1()
    // some code here for step 2
}
private func action3() {
    action2()
    // some code here for step 3
}
private func logoutAction() {
    // logout code
}
Then, in your tests, call those action functions, like this:
func test1() {
    action1()
    logoutAction()
}
func test2() {
    action2()
    logoutAction()
}
func test3() {
    action3()
    logoutAction()
}

You should make use of the setUp() and tearDown() methods of XCTest to reset the state of your application.
Also, don't call one test from another. Each test should set itself up and not rely on other tests. If there is common functionality between tests, you can create a function in your test class that you call from each test.
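For example, a minimal sketch of that idea might look like the following. The SignupTests class name, the "Logout" button identifier, and the logoutIfNeeded() helper are placeholder assumptions for illustration; replace them with however your app actually exposes logout:

import XCTest

class SignupTests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app.launch()
    }

    override func tearDown() {
        // Shared cleanup: every test finishes in a logged-out state,
        // so the next test can sign up or sign in from scratch.
        logoutIfNeeded()
        super.tearDown()
    }

    // Hypothetical helper; adjust the query to your app's real logout flow.
    private func logoutIfNeeded() {
        let logoutButton = app.buttons["Logout"]
        if logoutButton.exists {
            logoutButton.tap()
        }
    }

    func test1() {
        // some code here for step 1 (or call a shared action function)
    }
}

With this layout, each test only exercises its own step, and the logged-out precondition is guaranteed by tearDown() rather than by chaining tests together.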

Related

MockK mock method returning Interface Future

Hello, I have the following problem.
I am trying to mock a call to an injected executor so that it executes the given Callable immediately. Later in the test, the arguments of methods called inside the Callable are captured and asserted. See the mock example below.
Maven 3, JDK 10-slim, MockK 1.9
// this task should be executed by the executor
private val taskCaptor = slot<Callable<Boolean>>()
private val asyncTaskExecutor: LazyTraceThreadPoolTaskExecutor = mockk<LazyTraceThreadPoolTaskExecutor>().apply {
    // this was my 1st try, but the result was java.lang.InstantiationError: java.util.concurrent.Callable
    // every { submit(capture(taskCaptor)) } returns CompletableFuture.completedFuture(taskCaptor.captured.call())
    // every { submit(any()) } returns CompletableFuture.completedFuture(true)
    every { submit(ofType(Callable::class)) } returns FutureTask<Boolean>(Callable { true })
}
Later on I changed the Callable interface to an implementation that I created in the tested class, and I got another exception.
With the same code as above, the exception was
java.lang.InstantiationError: java.util.concurrent.Future
which is the return type of the submit method.
Is my approach to mocking wrong?
Not sure if this is the best way to implement it, but it worked for me this way:
private val taskCaptor = slot<Callable<Boolean>>()
// Declaration of mockFuture (omitted in the original answer): a MockK mock of the Future returned by submit()
private val mockFuture: Future<Boolean> = mockk()
private val asyncTaskExecutor: LazyTraceThreadPoolTaskExecutor = mockk<LazyTraceThreadPoolTaskExecutor>().apply {
    every { submit(ofType(Callable::class)) } returns mockFuture
    every { mockFuture.get() } returns true
}

How to capture a screenshot with PHPUnit and Selenium2 when the test fails?

I'm using PHPUnit 4.6 and PHPUnit Selenium 1.4.2 with PhantomJS. I want to capture a screenshot of the last page when a Selenium test fails.
In the PHPUnit manual there is an example for Selenium 1, but I'm trying to use it with Selenium 2, because I need to use GhostDriver.
WebTestCase.php
class WebTestCase extends PHPUnit_Extensions_Selenium2TestCase
{
    protected $captureScreenshotOnFailure = TRUE;
    protected $screenshotPath = '/../../screenshots';
    protected $screenshotUrl = 'http://localhost:8080/screenshots';

    protected function setUp() {
        $this->setBrowser('phantomjs');
        $this->setBrowserUrl("http://localhost:8080");
        $this->setHost('localhost');
    }
}
Test.php
class Test extends WebTestCase
{
    public function testTitle()
    {
        $this->url('');
        assertEquals($this->title(), "My App");
    }
}
But this does not capture a screenshot.
$ vendor/bin/phpunit
PHPUnit 4.6-ge85198b by Sebastian Bergmann and contributors.
Configuration read from /MyApp/phpunit.xml
F
Time: 231 ms, Memory: 5.50Mb
There was 1 failure:
1) Test::testTitle
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-''
+'My App'
/MyApp/tests/functional/Test.php:7
FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
Hmm. The difference between SeleniumTestCase and Selenium2TestCase is not really well documented in the PHPUnit manual. There is also no clear separation and not enough usage examples for common cases with Selenium2.
$captureScreenshotOnFailure does not exist on PHPUnit_Extensions_Selenium2TestCase.
Anyway, let's try putting this together:
<?php
class Test extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp() {
        $this->setBrowser('phantomjs');
        $this->setBrowserUrl("http://localhost:8080");
        $this->setHost('localhost');
    }

    public function testEnterText()
    {
        $this->url("/");
        try {
            $this->assertEquals($this->title(), "My App");
        } catch (Exception $e) {
            $this->screenshot(__DIR__ . '/' . $this->getName() . '-' . time() . '.png');
        }
    }

    public function screenshot($file)
    {
        $filedata = $this->currentScreenshot();
        file_put_contents($file, $filedata);
    }
}
The try-catch block: the assertion is done in the try part; if the assertion fails, the exception is caught. The catch block gives us a chance to grab details of the exception, re-throw it, or take a screenshot.
The main function is $this->currentScreenshot(), which was used in this test:
https://github.com/giorgiosironi/phpunit-selenium/blob/master/Tests/Selenium2TestCaseTest.php#L733
ScreenshotListener
Please note that there is a ScreenshotListener around, which might be worth looking at:
https://github.com/giorgiosironi/phpunit-selenium/blob/master/PHPUnit/Extensions/Selenium2TestCase/ScreenshotListener.php
with a usage example over at https://github.com/giorgiosironi/phpunit-selenium/blob/master/Tests/Selenium2TestCase/ScreenshotListenerTest.php
This might be a cleaner way to catch test failures and take shots.
Combining the solutions from #Jens A. Koch and #John Joseph, we get this:
<?php
class homepageTest extends PHPUnit_Extensions_Selenium2TestCase {

    private $listener;

    public function setUp() {
        // Your screenshots will be saved in '/var/www/vhosts/screenshots/'
        $screenshots_dir = '/var/www/vhosts/screenshots/';
        $this->listener = new PHPUnit_Extensions_Selenium2TestCase_ScreenshotListener($screenshots_dir);

        $this->setBrowser('firefox');
        $this->setBrowserUrl('https://netbeans.org');
    }

    public function testNetbeansContainsHorses() {
        $this->url('https://netbeans.org');
        $this->assertContains('Equestrian', $this->title()); // Will fail on the NetBeans page.
    }

    public function onNotSuccessfulTest($e) {
        $this->listener->addError($this, $e, microtime(true));
        parent::onNotSuccessfulTest($e);
    }
}
A way of doing this across all your web tests is to override one of the test failure functions from the parent test case class, and capture your screenshot there.
Example:
class MyBaseWebTests extends PHPUnit_Extensions_Selenium2TestCase
{
    protected $directory = '/some_path_to_put_screenshots_in/';

    // Override PHPUnit_Extensions_Selenium2TestCase::onNotSuccessfulTest
    public function onNotSuccessfulTest(Exception $e)
    {
        $filedata = $this->currentScreenshot();
        $file = $this->directory . get_class($this) . '.png';
        file_put_contents($file, $filedata);

        parent::onNotSuccessfulTest($e);
    }
}
Now, whenever any of your web tests fails, it will dump a screenshot in that folder with the name of the test class as the filename.
Use this to save the screenshot; it is very useful in the case of a headless browser.
$fp = fopen('path/35.png', 'wb');
fwrite($fp, $this->currentScreenshot());
fclose($fp);
sleep(1);

markTestSkipped() not working with sausage-based Selenium tests via Sauce Labs

I am using the sausage framework to run parallelized PHPUnit-based Selenium WebDriver tests through Sauce Labs. Everything is working well until I want to mark a test as skipped via markTestSkipped(). I have tried this in two ways:
Setting markTestSkipped() in the test method itself:
class MyTest
{
    public function setUp()
    {
        // Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case, the test gets skipped, but only after running setUp, which does a lot of unnecessary work for a skipped test. To top it off, PHPUnit does not track the test as skipped -- in fact it doesn't track the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The other method is by setting markTestSkipped() in the setUp method:
class MyTest
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = // some checks to see if the test should be run
        return $shouldrun;
    }

    public function testMyTest()
    {
        // run the test
    }
}
In this case, setUp is skipped, but PHPUnit still fails to track the test as skipped; it still produces the output shown above. Any ideas why PHPUnit is not tracking my skipped tests when they are executed in this fashion?
It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in PHPUnit when using paratest. More accurately, PHPUnit won't log tests which call markTestSkipped() or markTestIncomplete() when it is invoked with arguments['junitLogfile'] set -- and paratest calls PHPUnit with a junitLogfile.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either PHPUnit or paratest...

In MSTest how to check if last test passed (in TestCleanup)

I'm creating web tests in Selenium using MSTest and want to take a screenshot every time a test fails, but not when a test passes.
What I wanted to do is put a screenshot function inside the [TestCleanup] method and run it if the test failed but not if it passed. But how do I figure out whether the last test passed?
Currently I'm setting a bool to false in [TestInitialize] and to true if the test runs through.
But I don't think that's a very good solution.
So basically I'm looking for a way to detect in [TestCleanup] whether the last test passed or failed.
Solution
if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
{
    // some code
}
The answer by #MartinMussmann is correct, but incomplete. To access the "TestContext" object you need to make sure to declare it as a property in your TestClass:
[TestClass]
public class BaseTest
{
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void TestCleanup()
    {
        if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
        {
            // some code
        }
    }
}
This is also mentioned in the following post.

Using Rhino Mocks, why does invoking a mocked method via a property during test initialization return Expected call #1, Actual call #0?

I currently have a test which tests the presenter I have in the MVP model. On my presenter I have a property which calls into my View, which in my test is mocked out. In the initialization of my test, after I set my View on the Presenter to be the mocked View, I set the property on the Presenter, which calls this method.
In my test I do not have an Expect.Call for the method I invoke, yet when I run it I get this Rhino Mocks exception:
Rhino.Mocks.Exceptions.ExpectationViolationException: IView.MethodToInvoke(); Expected #1, Actual #0.
From what I understand of Rhino Mocks, as long as I am invoking on the mock outside the expecting block, it should not be recording this, so I would expect the test to pass. Is there a reason it is not passing?
Below is some code to show my setup.
public class Presenter
{
    public IView View;

    public Presenter(IView view)
    {
        View = view;
    }

    private int _property;
    public int Property
    {
        get { return _property; }
        set
        {
            _property = value;
            View.MethodToInvoke();
        }
    }
}
... Test Code Below ...
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);
    _presenter.Property = 1;
}

[TestMethod]
public void Test()
{
    Rhino.Mocks.With.Mocks(_mocks).Expecting(delegate
    {
    }).Verify(delegate
    {
        _presenter.SomeOtherMethod();
    });
}
Why in the world would you want to test the same thing each time a test is run?
If you want to test that a specific thing happens, you should check that in a single test.
The pattern you are using now implies that you need to
- set up prerequisites for testing
- do the behavior
- check that the behavior is correct
and then repeat that several times in one test.
You need to start testing one thing per test; that helps make the tests clearer and makes it easier to use the AAA syntax.
There are several things to discuss here, but it would certainly be clearer if you did something like this:
[TestMethod]
public void ShouldCallInvokedMethodWhenSettingProperty()
{
    var viewMock = MockRepository.GenerateMock<IView>();
    var presenter = new Presenter(viewMock);

    presenter.Property = 1;

    viewMock.AssertWasCalled(view => view.InvokedMethod());
}
Read up more on Rhino Mocks 3.5 syntax here: http://ayende.com/Wiki/Rhino+Mocks+3.5.ashx
What exactly are you trying to test in the Test method?
You should try to avoid using strict mocks.
I suggest using Rhino's AAA syntax (Arrange, Act, Assert).
The problem lay in me not understanding the record/verify cycle that goes on with strict mocks. To fix the issue I was having, this is how I changed my TestInitialize function. It basically does a quick check of the initial state I'm setting up for all my tests.
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);

    Expect.Call(delegate { _presenter.View.InvokedMethod(); });
    _mocks.ReplayAll();
    _mocks.VerifyAll();
    _mocks.BackToRecordAll();

    _presenter.Property = 1;
}