How to test Jobs in the Play Framework?

I have:
@OnApplicationStart
public class SomeClass extends Job {
    public void doJob() { ... }
}
How can I test in my unit test that doJob() was actually launched when the application started?

I would argue that this is not a unit test, but an integration test.
You can test the Job itself by simply calling it with the syntax new MyJob().now();, but as you are looking to test the @OnApplicationStart behaviour, you would be better off doing this as a Selenium test and checking that the data you expect to be made available by the bootstrap job is actually present.

Related

Execute single selenium test multiple times in parallel

I was wondering if anyone had any advice on an easy way of executing a single Selenium test multiple times in parallel.
I have one test that I would like to execute; it should spin up 10 Chrome instances and run the test 10 times in parallel, the idea being to test load/performance.
I could split the test up into individual classes and get them to run in parallel, but this is a bit overkill. Is there a simpler way of running this with NUnit?
The tests are written in C# and we are using NUnit as the test runner, with BDDfy for the test language.
Bit of a difficult question to write down, but I hope some people understand what I am trying to achieve.
You can do it by adding multiple test cases to the same test, even if you don't have any parameters to pass to the test, and using ParallelScope.All on the fixture to run all test cases within the fixture in parallel:
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture, Parallelizable(ParallelScope.All)]
public class MyTestFixture
{
    // Ten distinct case names, so NUnit generates ten separate test cases.
    private static readonly IEnumerable<string> TestCases =
        Enumerable.Range(1, 10).Select(i => $"Run {i}");

    [TestCaseSource(nameof(TestCases))]
    public void SingleTestRepeatedMultipleTimes(string testCase)
    {
        // test steps
    }
}
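Note that ParallelScope.All only marks the cases as eligible to run in parallel; how many actually run at once is capped by NUnit's worker-thread count, which defaults to a value based on the processor count. If you need all 10 Chrome instances going at the same time, a minimal sketch (an assembly-level attribute you can drop into any file in the test project, e.g. a hypothetical AssemblyInfo.cs) would be:

using NUnit.Framework;

// Assumption: 10 workers is enough for this suite; adjust to match the number
// of parallel browser sessions your machine can actually sustain.
[assembly: LevelOfParallelism(10)]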

Retrieve output log when test fails during setup

I'm running automated tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute code at the appropriate points before a test run, feature, or scenario.
I'm also using log4net to log test output.
When a test fails during the test or during the BeforeScenario phase, I can see the output logged.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose failures that happen during those early stages on the remote testing server, where all I have are the output logs.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
You can implement a custom method that inspects the test result through the TestContext object, and call it from [TearDown] or somewhere at the end of the test:
[TearDown]
public void LogTestOutcome()
{
    // TestStatus lives in NUnit.Framework.Interfaces
    var result = TestContext.CurrentContext.Result;
    if (result.Outcome.Status == TestStatus.Failed)
    {
        string message = result.Message;
        string stackTrace = result.StackTrace;
        // log or attach the failure details here
    }
    else if (result.Outcome.Status == TestStatus.Passed)
    {
        // user-defined action
    }
    else
    {
        string otherOutcome = result.Outcome.Status.ToString();
    }
}
Actually it's weird that it doesn't show you the log when a test fails; it shows up fine for me.
What I suggest is to check that log4net pushes all the logs to the console. If you haven't done any special manipulation, your logger should have a console appender by default.
I initiate my logger like this:
private static readonly ILog log =
LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess is that when your test fails in a OneTimeSetUp fixture, it doesn't have any log output yet.
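If you are not sure that a console appender is configured at all, a minimal sketch (assuming a plain code-based setup with no log4net XML configuration; the class and hook names here are illustrative) is to wire one up from a BeforeTestRun hook, so that anything logged during the early hooks still reaches the console output that the remote runner captures:

using log4net;
using log4net.Config;
using TechTalk.SpecFlow;

[Binding]
public class LoggingHooks
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingHooks));

    // Order = 0 asks SpecFlow to run this before other BeforeTestRun hooks.
    [BeforeTestRun(Order = 0)]
    public static void ConfigureLogging()
    {
        // BasicConfigurator attaches a simple console appender to the root logger.
        BasicConfigurator.Configure();
        Log.Info("log4net configured for the test run");
    }
}

This doesn't change how the logs are collected; it only makes sure something is listening on the console before any BeforeTestRun or BeforeFeature failure occurs.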

Selenium multiple test run fails

I have one suite, say Suite1.
In Suite1 I am calling two class methods, Test1 and Test2, each annotated with @Test, and a main method with the @BeforeSuite and @AfterSuite annotations. When I run the suite using TestNG, it runs only the second method and the first method always fails; below is the trace of the failed result.
I am just curious why the last test always passes. This happens when I use Selenium keywords, but if I use a standard Java statement like System.out.println(); the test passes.

How to write same codeception acceptance test case with many different set of inputs

In Codeception acceptance testing, how do I run/write the same test case with many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept):
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,$m);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
public function login($I)
{
$I->amOnPage(login::$loginIndex);
$I->wait(2);
$I->fillField(login::$userName,"test@gmail.com");
$I->fillField(login::$password,"test");
$I->click(login::$submitButton);
$I->see(login::$assertionWelcome);
$I->wait(2);
$I->click(login::$logoutLink);
}
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs in an array, calling the test case in a for loop and passing the array values as input parameters; in Acceptance.php the multiple sets of inputs can then be handled conditionally.
This runs everything as only one test case with different assertions. But it only runs until an assertion fails for one of the inputs; if any assertion fails, the test case stops executing further and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,"test@gmail.com","test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
public function login($I,$username,$password)
{
$I->amOnPage(login::$loginIndex);
$I->wait(2);
$I->fillField(login::$userName,$username);
$I->fillField(login::$password,$password);
$I->click(login::$submitButton);
$I->see(login::$assertionWelcome);
$I->wait(2);
$I->click(login::$logoutLink);
}
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for, one test running through a number of assertions, goes against the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For instance, for logging in you might have one cept for an invalid username, one for an invalid password, one for too many attempts, etc. Then when one test fails, you as the developer know exactly which aspect has failed and which continue to pass. If all the aspects are wrapped up in one test, you don't get the full picture until you start to debug.

How to set a timeout for every selenium test case?

I want my test cases to run only for a particular amount of time. For example, the execution of every Selenium test case should be restricted to 10 minutes; it should not run beyond 10 minutes. I tried @Test with a timeout in milliseconds, but it didn't work as per my example and I could not figure out where I am going wrong. Can any of you kindly help me sort this out? Thanks in advance.
It really depends on your language/testing environment, as Selenium is only an automation framework, not a unit testing framework, so it has no concept of a test timeout itself.
If you're using Visual Studio Unit Testing, you can use this to set a unit test timeout (sample in C#):
[TestMethod(), Timeout(10000)]
public void TestLogin()
{
}
If you are using the TestNG or JUnit test framework, the timeout is set on the @Test annotation instead: @Test(timeOut = 10000) in TestNG and @Test(timeout = 10000) in JUnit 4.
@Test(timeOut = 10000) // TestNG; with JUnit 4 use @Test(timeout = 10000)
public void testLogin() {
    driver.findElement(By.name("username")).sendKeys("username");
    driver.findElement(By.name("password")).sendKeys("password");
    driver.findElement(By.cssSelector("submit")).click();
}