How to define run-once context steps with Gauge?

In Gauge, we can repeat a set of steps before each scenario by placing Context Steps right after the test specification heading. For example:
Delete project
==============
* User log in as "mike"
Delete single project
---------------------
* Delete the "example" project
* Ensure "example" project has been deleted
Delete multiple projects
------------------------
* Delete all the projects in the list
* Ensure project list is empty
In the above Delete project specification, the context step User log in as "mike" is executed twice, once for each of the two delete scenarios.
How can I define steps that run once, before all scenarios of a test specification?

Since you cannot have a step run only once per spec file, a workaround could be to use the suite store as a run-once guard:
public void loginAsMike() {
    // The suite data store persists across scenarios, so use it as a run-once guard.
    // Note: check for null rather than casting, since the key is absent on first use.
    if (DataStoreFactory.getSuiteDataStore().get("loggedIn") == null) {
        // execute the login steps
        DataStoreFactory.getSuiteDataStore().put("loggedIn", true);
    }
}
This way the login runs only once. The only issue here would be if you were to run multiple tests in parallel, since two specs could read the flag before either sets it. However, if you only log in as mike in one spec file, this is a good solution.
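For reference, here is a minimal sketch of that guard wired up as a complete Gauge Java step implementation (the class name and the per-user key are my own additions, not from the answer above):

import com.thoughtworks.gauge.Step;
import com.thoughtworks.gauge.datastore.DataStoreFactory;

public class LoginSteps {

    // Implements the context step "User log in as <user>" from the spec above.
    // The suite data store survives across scenarios, so the body runs only once per user.
    @Step("User log in as <user>")
    public void loginAs(String user) {
        String key = "loggedIn:" + user;
        if (DataStoreFactory.getSuiteDataStore().get(key) == null) {
            // ... perform the actual login here ...
            DataStoreFactory.getSuiteDataStore().put(key, true);
        }
    }
}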

Related

Polarion: xUnitFileImport creates duplicate testcases instead of referencing existing ones

I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
  <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
  <project>myProject</project>
  <userAccountVaultKey>myKey</userAccountVaultKey>
  <maxCreatedDefects>10</maxCreatedDefects>
  <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
  <templateTestRunId>xUnit Build Test</templateTestRunId>
  <idRegex>(.*).xml</idRegex>
  <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works and I get my test results imported into a new test run, with new test cases created. But if I run the import job multiple times (for each test run), it creates duplicate test case work items even though they have the same name.
Is there some way to tell the import job to link the newly created test run to the existing test cases, instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in "testing > configuration" is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the classnames in the report.
YES, I still get the same behaviour with the classname set.
In the scheduler I have changed the job parameter for the group id because it wasn't filled. New value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build Test" to "xUnit Manual Test Upload"; this however did not lead to any visible change.
I changed the template status from draft to active. Had no change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads to the conclusion that no fields in the test cases interfere with referencing them.
After all the time I have invested researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone can show this functionality working.
For the importer to identify an existing test case, I believe you have to set a custom field that ties the test case to the ID in the xUnit file you're importing.
Try adding a custom field to the TestCase work item and selecting it under the "Custom Field for Test Case ID" option in the settings.
If you're planning on creating test cases beforehand, note that the ID is formed from {classname}.{name} for a given case.
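For illustration, here is a minimal JUnit result snippet (the class and test names are hypothetical) and the ID that would be derived from it:

<testsuite name="e2e">
  <testcase classname="Login" name="shouldRejectInvalidPassword" time="0.42"/>
</testsuite>

Here the derived test case ID would be Login.shouldRejectInvalidPassword; with the classname stripped, as in the question, it would be .shouldRejectInvalidPassword.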

Is there a way to test roblox games?

As I started to understand a little bit more about Roblox, I was wondering whether there is any way to automate the testing. As a first step, only the Lua scripting, but ideally also simulating the game and its interactions.
Is there any way of doing such a thing?
Also, if there are already best practices for testing on Roblox (this includes Lua scripting), I would like to know more about them.
Unit Testing
For Lua modules, I would recommend the library TestEZ. It was developed in-house by Roblox engineers to allow for behavior-driven tests. It allows you to specify a location where test files exist and will give you pretty detailed output as to how your tests did.
This example will run in Roblox Studio, but you can pair TestEZ with other libraries like Lemur for command-line and continuous-integration workflows. Anyways, follow these steps:
1. Get the TestEZ Library into Roblox Studio
Download Rojo. This program allows you to convert project directories into .rbxm (Roblox model object) files.
Download the TestEZ source code.
Open a Powershell or Terminal window and navigate into the downloaded TestEZ directory.
Build the TestEZ library with the command rojo build --output TestEZ.rbxm
Make sure that it generated a new file called TestEZ.rbxm in that directory.
Open Roblox Studio to your place.
Drag the newly created TestEZ.rbxm file into the world. It will unpack the library into a ModuleScript with the same name.
Move this ModuleScript somewhere like ReplicatedStorage.
2. Create unit tests
In this step we need to create ModuleScripts with names ending in `.spec` and write tests for our source code.
A common way to structure code is to keep your classes in ModuleScripts with their tests right next to them. So let's say you have a simple utility class in a ModuleScript called MathUtil:
local MathUtil = {}

function MathUtil.add(a, b)
    assert(type(a) == "number")
    assert(type(b) == "number")
    return a + b
end

return MathUtil
To create tests for this file, create a ModuleScript next to it and call it MathUtil.spec. This naming convention is important, as it allows TestEZ to discover the tests.
return function()
    local MathUtil = require(script.Parent.MathUtil)

    describe("add", function()
        it("should verify input", function()
            expect(function()
                local result = MathUtil.add("1", 2)
            end).to.throw()
        end)

        it("should properly add positive numbers", function()
            local result = MathUtil.add(1, 2)
            expect(result).to.equal(3)
        end)

        it("should properly add negative numbers", function()
            local result = MathUtil.add(-1, -2)
            expect(result).to.equal(-3)
        end)
    end)
end
For a full breakdown on writing tests with TestEZ, please take a look at the official documentation.
3. Create a test runner
In this step, we need to tell TestEZ where to find our tests. So create a Script in ServerScriptService with this:
local TestEZ = require(game.ReplicatedStorage.TestEZ)

-- add any other root directory folders here that might have tests
local testLocations = {
    game.ServerStorage,
}

local reporter = TestEZ.TextReporter
-- local reporter = TestEZ.TextReporterQuiet -- use this one if you only want to see failing tests

TestEZ.TestBootstrap:run(testLocations, reporter)
4. Run your tests
Now we can run the game and check the Output window. We should see our test output:
Test results:
[+] ServerStorage
  [+] MathUtil
    [+] add
      [+] should properly add negative numbers
      [+] should properly add positive numbers
      [+] should verify input

3 passed, 0 failed, 0 skipped - TextReporter:87
Automation Testing
Unfortunately, there is no way to fully automate the testing of your game.
You can use TestService to create tests that automate the testing of some interactions, like a player touching a kill block or checking bullet paths from guns. But there isn't a publicly exposed way to start your game, record inputs, and validate the game state.
There's an internal service for this, and a non-scriptable service for mocking inputs, but without overriding CoreScripts it's really not possible at this moment in time.
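As a rough sketch of that TestService idea (the KillBlock part and its Touched handler are assumed to already exist in the place; they are not part of the answer above):

local TestService = game:GetService("TestService")

-- Assumed setup: workspace.KillBlock destroys anything that touches it.
local killBlock = workspace:WaitForChild("KillBlock")

-- Drop a throwaway part onto the kill block.
local dummy = Instance.new("Part")
dummy.Name = "DummyVictim"
dummy.Position = killBlock.Position + Vector3.new(0, 5, 0)
dummy.Parent = workspace

task.wait(1) -- give physics and the Touched handler time to fire

-- TestService:Check prints a pass/fail line to the output.
TestService:Check(dummy.Parent == nil, "kill block should remove parts that touch it")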

Is it possible to include the same few lines of scenario at the beginning of a new one?

I have a 138-test-long scenario that allows me to test whether a workflow works as it should.
However, I'm not a huge fan of having to repeat the same 6 lines that log me in as an administrator, which are:
Given I am on "user/login"
And I fill in "admin#admin.com" for "name"
And I fill in "admin" for "pass"
And I press "Log in"
Then I should get a "200" HTTP response
And I should see "Dashboard"
I repeat this part 6 times so far, and I'm planning on needing it a few more times for some other tests.
So my question is the following: is there a way, through the FeatureContext file or any other means, to reuse these lines?
Thank you in advance
So this is how I did it:
Instead of calling the Gherkin sentences one after the other, I searched the vendor/ directory for examples of how such sentences are implemented.
My function that logs me in looks like the following:
/**
 * @throws ElementNotFoundException
 * @throws Exception
 * @Given I am the administrator
 */
public function iAmTheAdministrator() {
    $this->visitPath('/user/login');
    $element = $this->getSession()->getPage();
    $element->fillField('name', 'admin@admin.com');
    $element->fillField('pass', 'admin');
    $element->pressButton("Se connecter");
    $this->assertSession()->pageTextContains('Dashboard');
}
This is pretty straightforward and works well.
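With that step defined in the FeatureContext, each scenario needs a single line in place of the six (the scenario name here is made up):

Scenario: Administrator sees the dashboard
  Given I am the administrator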
There are 2 ways I know of to repeat steps. One is with background steps and the other involves running snippets of steps.
Background steps run at the start of each scenario.
Background:
  Given I have done this
  And this other thing

Scenario: Do stuff
  When I do this
  Then stuff should happen
This only works if all tests have the same starting procedure...
Snippets run whenever you call them, which I assume is what you would rather want.
Given I have logged in as an administrator
Step definition:
Given(/^I have logged in as an administrator$/) do
  steps %{
    Given I am on "user/login"
    And I fill in "admin@admin.com" for "name"
    And I fill in "admin" for "pass"
    And I press "Log in"
    Then I should get a "200" HTTP response
    And I should see "Dashboard"
  }
end
This allows you to use a single step which you can call at any time to run multiple steps.
Hope this helps.

How to get discarded builds and the date of build execution from Jenkins API?

I'd like to retrieve information about all builds of a specified job from the Jenkins API.
I've configured the job to Discard old builds:
Days to keep builds - 20
Max # of builds to keep - 10
1) Is it possible to get information about builds that are discarded?
I can also get the following information about each build that has not been discarded:
"builds" : [
{
"_class" : "hudson.model.FreeStyleBuild",
"number" : 34,
"url" : "http://myUrl/view/Myjob/34/"
}
]
I use the following URL for this: http://MyUrl/view/MyJob/api/json?tree=builds[url,number]&pretty=true
2) Is it possible to get the date of build execution?
To discard a build means, well, to discard it, i.e. its information is gone.
I can imagine a solution where you:
collect, store and update (= extend) information about all existing builds on a regular basis;
whenever you'd like to know which builds have been discarded:
collect the information about all existing builds at this very moment,
don't update your store, but
create a diff between the current information (C) and the information previously stored in your store (S). The builds that are in S but not in C are the discarded builds.
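A minimal sketch of that bookkeeping, in Java for illustration (the store file name is an assumption, and the JSON is parsed with a regex for brevity; a real implementation would use a JSON library):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DiscardedBuilds {
    public static void main(String[] args) throws Exception {
        // Job URL from the question; adjust to your Jenkins instance.
        String api = "http://MyUrl/view/MyJob/api/json?tree=builds[number]";
        String body = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(URI.create(api)).build(),
                        HttpResponse.BodyHandlers.ofString())
                .body();

        // Crude extraction of the current build numbers (C).
        Set<String> current = new TreeSet<>();
        Matcher m = Pattern.compile("\"number\"\\s*:\\s*(\\d+)").matcher(body);
        while (m.find()) {
            current.add(m.group(1));
        }

        // Previously stored build numbers (S); builds in S but not in C were discarded.
        Path store = Path.of("builds-store.txt");
        Set<String> previous = Files.exists(store)
                ? new TreeSet<>(Files.readAllLines(store))
                : new TreeSet<>();
        Set<String> discarded = new TreeSet<>(previous);
        discarded.removeAll(current);
        System.out.println("Discarded builds: " + discarded);

        // Extend the store (never shrink it) so future diffs see all known builds.
        previous.addAll(current);
        Files.write(store, previous);
    }
}

As for question 2: each build object in the JSON API also exposes a timestamp field (the start time in milliseconds since the epoch), so a query like http://MyUrl/view/MyJob/api/json?tree=builds[url,number,timestamp]&pretty=true returns the execution date alongside the number and URL.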

MsTest, DataSourceAttribute - how to get it working with a runtime-generated file?

For some tests I need to run a data-driven test with a configuration that is generated (via reflection) in the ClassInitialize method. I tried out everything, but I just cannot get the data source properly set up.
The test takes a list of classes from a CSV file (one line per class) and then tests that the mappings to the database work out well (i.e. it tries to get one item from the database for every entity, which throws an exception when the table structure does not match).
The test method is:
[DataSource(
    "Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\EntityMappingsTests.Types.csv",
    "EntityMappingsTests.Types#csv",
    DataAccessMethod.Sequential)
]
[TestMethod()]
public void TestMappings() {
Obviously the file is EntityMappingsTests.Types.csv. It should be in the DataDirectory.
Now, in the Initialize method (marked with ClassInitialize) I put that together and then try to write it.
WHERE should I write it to? WHERE IS THE DataDirectory?
I tried:
File.WriteAllText(context.TestDeploymentDir + "\\EntityMappingsTests.Types.csv", types.ToString());
File.WriteAllText("EntityMappingsTests.Types.csv", types.ToString());
Both result in "the unit test adapter failed to connect to the data source or read the data". More exactly:
Error details: The Microsoft Jet database engine could not find the
object 'EntityMappingsTests.Types.csv'. Make sure the object exists
and that you spell its name and the path name correctly.
So where should I put that file?
I also tried just writing it to the current directory and taking out the DataDirectory part - same result. Sadly, there is limited debugging support here.
Please use the Process Monitor tool from technet.microsoft.com/en-us/sysinternals/bb896645. Put a filter on MSTest.exe or the associated qtagent32.exe and find out what locations it is trying to load from, and at what point in the test loading process. Then please provide an update with those details here.
After you add the CSV file to your VS project, you need to open its properties. Set the property "Copy to Output Directory" to "Copy Always". DataDirectory defaults to the location of the compiled executable, which runs from the output directory, so the file will be found there.
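For the runtime-generated case from the question, here is a hedged sketch of writing the CSV from ClassInitialize into the directory that, per the answer above, DataDirectory defaults to (the CSV content is a stand-in for the reflection-generated list; whether ClassInitialize runs early enough for the data source to see the file is exactly what the Process Monitor suggestion would verify):

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EntityMappingsTests
{
    [ClassInitialize]
    public static void Initialize(TestContext context)
    {
        // Stand-in for the reflection-generated list of entity type names.
        string csv = "TypeName" + Environment.NewLine + "Login" + Environment.NewLine;

        // Write next to the compiled test assembly, where |DataDirectory| resolves by default.
        string dataDir = AppDomain.CurrentDomain.BaseDirectory;
        File.WriteAllText(Path.Combine(dataDir, "EntityMappingsTests.Types.csv"), csv);
    }
}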