In a data-driven framework, we use Selenium along with TestNG to run multiple tests in parallel. How can the same be implemented in a keyword-driven framework?
In the data-driven approach, every test case is a separate method, so we can tell TestNG via annotations which methods to run and how many to run in parallel.
In the keyword-driven approach, every test case is a separate Excel sheet, and multiple sheets in the same workbook make up a test suite. How can these sheets/test cases be annotated or referenced so that they run in parallel, similar to the execution structure and process of the data-driven framework?
One lame solution I thought of was a hybrid approach: create one method per test case that calls the corresponding Excel sheet.
For example:
@Test
public void TestCase_001() {
    // Read the keyword-driven test case
    // XLS_WorkbookName - the Excel workbook (test suite) containing multiple test cases
    // XLS_SheetName - the Excel sheet whose rows each contain an element ID, the operation to perform and the data to use
    ReadAndExecuteTestCase(XLS_WorkbookName_XYZ, XLS_SheetName_ABC);
}

@Test
public void TestCase_002() {
    // Read the keyword-driven test case
    ReadAndExecuteTestCase(XLS_WorkbookName_ABC, XLS_SheetName_XYZ);
}
I'm not sure if the above example is the appropriate way to go about it. Suggestions would be appreciated. Thanks in advance.
One solution can be:
Have a master sheet of cases to execute, which acts as your suite.
Have your data provider read this master sheet, and have a single @Test method that takes the test case data as arguments.
This test method basically reads the steps and executes them - something like your ReadAndExecuteTestCase method.
Make the data provider parallel and control concurrency with the data-provider thread count; a sketch follows below.
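For illustration, here is a minimal sketch of that idea, assuming a hypothetical readMasterSheet helper plus the ReadAndExecuteTestCase method from the question; the class, workbook and sheet names are placeholders, not an existing API:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class KeywordSuiteRunner {

    // Each row of the master sheet identifies one keyword-driven test case: { workbookName, sheetName }
    @DataProvider(name = "masterSheet", parallel = true)
    public Object[][] masterSheet() {
        return readMasterSheet("TestSuite.xlsx", "MasterSheet");
    }

    @Test(dataProvider = "masterSheet")
    public void runKeywordTestCase(String workbookName, String sheetName) {
        // Reads the keyword rows (element ID, operation, data) and executes them
        ReadAndExecuteTestCase(workbookName, sheetName);
    }

    // Hypothetical helper: returns one { workbookName, sheetName } row per test case listed in the master sheet
    private Object[][] readMasterSheet(String workbookName, String sheetName) {
        throw new UnsupportedOperationException("Implement with your Excel-reading code");
    }

    // Stands in for the question's existing keyword executor
    private void ReadAndExecuteTestCase(String workbookName, String sheetName) {
        throw new UnsupportedOperationException("Implement with your keyword engine");
    }
}

The number of test cases executed concurrently can then be controlled with the data-provider-thread-count attribute in the TestNG suite XML (or the -dataproviderthreadcount command-line option).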
I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
    <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
    <project>myProject</project>
    <userAccountVaultKey>myKey</userAccountVaultKey>
    <maxCreatedDefects>10</maxCreatedDefects>
    <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
    <templateTestRunId>xUnit Build Test</templateTestRunId>
    <idRegex>(.*).xml</idRegex>
    <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works and I get my test results imported into a new test run, and new test cases are created. But if I run the import job multiple times (for each test run), it creates duplicate test case work items even though they have the same name.
Is there some way to tell the import job to link the newly created test run to the existing test cases, instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in "Testing > Configuration" is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the class names in the report.
Yes, I still get the same behaviour with the class name set.
In the scheduler I have changed the job parameter for the group ID because it wasn't filled. The new value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created, and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build Test" to "xUnit Manual Test Upload"; this, however, did not lead to any visible change.
I changed the template status from draft to active. No change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads to the conclusion that no fields in the test cases interfere with referencing them.
After all the time I have invested now, researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone proves to me that this functionality works.
I believe you have to set a custom field that identifies the test case with the xUnit file you're importing, so the importer can recognize the existing test case.
Try adding a custom field to the TestCase work item and selecting it here:
(screenshot: Custom Field for Test Case ID option in the Testing settings)
If you're planning on creating test cases beforehand, note that the ID is formed from {classname}.{name} for a given case.
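For example, given a JUnit result entry like the following (a made-up sample, not taken from the question's report), the importer would derive the test case ID LoginTests.Login from the classname and name attributes:

<testsuite name="e2e-results-login" tests="1">
    <!-- test case ID = {classname}.{name}, here "LoginTests.Login" -->
    <testcase classname="LoginTests" name="Login" time="1.234"/>
</testsuite>

If the classname attribute is empty, the derived ID starts with a dot (e.g. ".Login"), which matches the value described in the question above.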
I am new-ish to Selenium, so I use the Katalon Automation Recorder through Chrome to quickly draft scripts.
I have a script that makes an account on a website, but I want to make more than one account at a time (using a catch-all). Is there a way for Selenium/Katalon to alternate its input from a database of preset emails (a CSV, for example), or even to generate random values in front of the @domain.com each time the script loops?
Here is the current state of the script:
Thanks
As @Shivan Mishra mentioned, you have to do some data-driven testing. In Katalon you can create test data in the object repository (see https://docs.katalon.com/katalon-studio/docs/manage-test-data.html).
You can manage your test data in a script as in the following example:
import static com.kms.katalon.core.testdata.TestDataFactory.findTestData

def data = findTestData('path/to/your/testdata/in/object repository')

// Katalon test data rows and columns are indexed from 1
for (int i = 1; i <= data.getRowNumbers(); i++) {
    def value = data.getValue(1, i)   // value in column 1 of row i
    // do any action with your value
}
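If you would rather generate a random address in front of the domain on each loop iteration instead of reading from a file, a small Groovy helper along these lines would work (a sketch; yourdomain.com and the commented-out test object path are placeholders):

// Build a random local part, e.g. "user_k3f9x2a1@yourdomain.com"
def randomEmail(String domain = 'yourdomain.com') {
    def chars = ('a'..'z') + ('0'..'9')
    def localPart = (1..8).collect { chars[new Random().nextInt(chars.size())] }.join()
    return "user_${localPart}@${domain}"
}

// Use a fresh address on every iteration of the loop
5.times {
    def email = randomEmail()
    println email
    // WebUI.setText(findTestObject('Page_Signup/input_email'), email)
}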
I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.
As I want tests to be performed in a confined environment, I'm looking for the easiest way to load data into a dedicated database. Long story short, to that end I decided to use a MySQL dump of inserted data.
This is basically my seeder code:
public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}
Now here's the problem.
In my unit test, I can call the seeder, but:
If I call the seeder in setUpBeforeClass(), it works, although it doesn't fit my needs, as I want to be able to invoke different sets of data for different tests.
If I call the seeder within a test, the data is never inserted into the database (either with or without the transaction trait).
If I use DB::insert instead of ::raw, ::unprepared or ::statement, without using a raw SQL file, it works. But my inserts are too complicated for that.
Here are a few things I tried, all with the same results:
DB::raw(file_get_contents(__DIR__.'/database/data1.sql'));
DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));
$seeder = new CheckTestSeeder();
$seeder->run();
\Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
$this->seeInDatabase('jackpot.progressive', [
    'name_progressive' => 'aaa'
]);
Any pointers on how to proceed, and on why the behavior differs between setUpBeforeClass() and the test itself, would be appreciated!
You may use the Illuminate\Foundation\Testing\RefreshDatabase trait as explained here. If you need something more, you can override the refreshTestDatabase method of the RefreshDatabase trait.
protected function refreshTestDatabase()
{
    parent::refreshTestDatabase();

    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
}
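If you need a different data set per test rather than one global seed, a related option is to call a specific seeder from inside each test with Laravel's $this->seed() helper while using the RefreshDatabase trait. A rough sketch (CheckTestSeeder and the jackpot.progressive table come from the question; the class and method names are illustrative, and assertDatabaseHas is the newer name for the seeInDatabase assertion used above):

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ProgressiveTest extends TestCase
{
    use RefreshDatabase;

    public function testProgressiveIsSeeded()
    {
        // Run only the seeder this test needs, so each test can load its own data set
        $this->seed(CheckTestSeeder::class);

        $this->assertDatabaseHas('jackpot.progressive', [
            'name_progressive' => 'aaa',
        ]);
    }
}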
In Codeception acceptance testing, how do I run/write the same test case for many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,$m);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, "test@gmail.com");
        $I->fillField(login::$password, "test");
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs as an array and calling the test case in a for loop, with the array values as input parameters; in Acceptance.php, the multiple sets of inputs can then be handled with if conditions.
This runs everything as a single test case with different assertions.
But it only runs until it fails for one of the inputs/assertions: if any assertion fails, the test case stops executing further and is reported as failed.
You can pass parameters to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I, "test@gmail.com", "test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I, $username, $password)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, $username);
        $I->fillField(login::$password, $password);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for, one test running through a number of assertions, breaks the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For logging in, for instance, you might have one cept for an invalid username, one for an invalid password, one for too many attempts, and so on. Then, when/if one test fails, you as the developer know exactly which aspect has failed and which ones continue to pass. If all the aspects are wrapped up in one test, you don't know the full picture until you start to debug.
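As an illustration of one cept per aspect, an invalid-password check might look roughly like this (a sketch; the file name invalidLoginCept.php and the wrong-password value are made up, and it reuses the page object properties from the question):

// invalidLoginCept.php
$I = new AcceptanceTester($scenario);
$I->wantTo('see that login fails with a wrong password');
$I->amOnPage(login::$loginIndex);
$I->fillField(login::$userName, "test@gmail.com");
$I->fillField(login::$password, "wrong-password");
$I->click(login::$submitButton);
$I->dontSee(login::$assertionWelcome);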
I designed a framework in RFT where the test cases are written in a spreadsheet specifying the data source, object and keyword, and a driver script processes all this data and routes it to the appropriate method for each test step. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could implement an algorithm to read those test cases from the spreadsheet and pass them to RQM as attachments with logTestResult.
For example:
logTestResult( <your attachment> , true );
If you are already connected to RQM, the adapter will automatically attach the files you indicate to RQM. So at the end you will see the results step by step, and if the script finishes correctly RQM will show it as "passed".
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script, as shown below:
public void testMain(Object[] args) throws Exception
{
    // Test case name passed in from the Script Argument section of the RQM test script
    String testCaseName = args[0].toString();
    logInfo("Parameter from RQM: " + testCaseName);

    // Hand the name to the keyword driver, which executes the matching spreadsheet test case
    ModuleDriver driver = new ModuleDriver();
    driver.execute_main(testCaseName);
}
Since I have verification points set up for each of the steps in my test cases, the results get reported in RQM per verification point, which is what I needed.