Using mocks in Karate DSL feature file with standalone run - karate

I have a REST service written in a language other than Java.
It has a few dependencies on other REST services.
For example, the service under development and test is A; the other services are B and C respectively.
I want to run system tests for A; some tests require B and/or C to be online to answer queries from A.
I wrote b-mock.feature and c-mock.feature to represent those services as mocks.
I also wrote some a-test-smth.feature files to run tests against A.
Is it possible to add some information to a-test-smth.feature to enable specific mocks for a concrete test?
Right now I have to run the standalone karate.jar twice: first for the mocks, second for the tests. That approach works, but I can't check that:
some API calls to A do not require B or C
nor can I emulate service B being down or, for example, returning a slow or incorrect response
Thanks.

Are you using Java? If so, then the best approach is to perform the set-up of your test in Java code. You can start 2 mocks for B and C and then start the main test for your service A. And at the end do clean-up if needed.
You can refer this as an example: https://github.com/intuit/karate/tree/master/karate-netty#consumer-provider-example
Row 3 shows how you can start a mock and run a Karate test.
If you are not using Java and would like to use only the stand-alone JAR, it is actually possible using Java interop, and quite easy; I just tried it.
EDIT: This API is now built into Karate, so you don't need to write the extra JS code below: https://github.com/intuit/karate/tree/master/karate-netty#within-a-karate-test
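For reference, with the built-in API the mock can be started directly from the Background; here is a minimal sketch based on the linked docs (using b-mock.feature from the question, and assuming a recent Karate version):
Background:
  * def server = karate.start('b-mock.feature')
  * url 'http://localhost:' + server.port
  # keeping the server object should also let you simulate B going down later via server.stop()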
(Obsolete)
First create this bit of JavaScript code that is smart enough to start a Karate mock:
function() {
  // Java interop: load the Karate mock server class bundled in the standalone JAR
  var Mock = Java.type('com.intuit.karate.netty.FeatureServer');
  var file = new java.io.File('src/test/java/mock/web/cats-mock.feature');
  // port 0 lets the server pick a free port; false disables SSL; the last argument is optional config
  var server = Mock.start(file, 0, false, null);
  return server.port;
}
And this is how it can look in the Background of your main Karate test. You can see how you can do some conditional logic if needed and you have plenty of ways to change things based on your environment.
Background:
  * def starter = read('start-mock.js')
  * def port = karate.env == 'mock' ? starter() : 8080
  * url 'http://localhost:' + port + '/cats'
Does this answer your question? Let me know and I will add this trick to the documentation!

Related

Is there a way to test Roblox games?

As I started to understand a little bit more about Roblox, I was wondering if there is any possible way to automate testing: as a first step, only the Lua scripting, but ideally also simulating the game and interactions.
Is there any way of doing such a thing?
Also, if there are already best practices for testing on Roblox (this includes Lua scripting), I would like to know more about them.
Unit Testing
For Lua modules, I would recommend the library TestEZ. It was developed in-house by Roblox engineers to allow for behavior-driven tests. It allows you to specify a location where test files exist and will give you pretty detailed output as to how your tests did.
This example will run in Roblox Studio, but you can pair it with other libraries like Lemur for command-line and continuous-integration workflows. Anyway, follow these steps:
1. Get the TestEZ Library into Roblox Studio
Download Rojo. This program allows you to convert project directories into .rbxm (Roblox model object) files.
Download the TestEZ source code.
Open a Powershell or Terminal window and navigate into the downloaded TestEZ directory.
Build the TestEZ library with this command: rojo build --output TestEZ.rbxm
Make sure that it generated a new file called TestEZ.rbxm in that directory.
Open Roblox Studio to your place.
Drag the newly created TestEZ.rbxm file into the world. It will unpack the library into a ModuleScript with the same name.
Move this ModuleScript somewhere like ReplicatedStorage.
2. Create unit tests
In this step we need to create ModuleScripts with names ending in `.spec` and write tests for our source code.
A common way to structure code is with your code classes in ModuleScripts and their tests right next to them. So let's say you have a simple utility class in a ModuleScript called MathUtil:
local MathUtil = {}

function MathUtil.add(a, b)
    assert(type(a) == "number")
    assert(type(b) == "number")
    return a + b
end

return MathUtil
To create tests for this file, create a ModuleScript next to it and call it MathUtil.spec. This naming convention is important, as it allows TestEZ to discover the tests.
return function()
    local MathUtil = require(script.Parent.MathUtil)

    describe("add", function()
        it("should verify input", function()
            expect(function()
                local result = MathUtil.add("1", 2)
            end).to.throw()
        end)

        it("should properly add positive numbers", function()
            local result = MathUtil.add(1, 2)
            expect(result).to.equal(3)
        end)

        it("should properly add negative numbers", function()
            local result = MathUtil.add(-1, -2)
            expect(result).to.equal(-3)
        end)
    end)
end
For a full breakdown on writing tests with TestEZ, please take a look at the official documentation.
3. Create a test runner
In this step, we need to tell TestEZ where to find our tests, so create a Script in ServerScriptService with this:
local TestEZ = require(game.ReplicatedStorage.TestEZ)

-- add any other root directory folders here that might have tests
local testLocations = {
    game.ServerStorage,
}

local reporter = TestEZ.TextReporter
--local reporter = TestEZ.TextReporterQuiet -- use this one if you only want to see failing tests

TestEZ.TestBootstrap:run(testLocations, reporter)
4. Run your tests
Now we can run the game and check the Output window. We should see our test output:
Test results:
[+] ServerStorage
[+] MathUtil
[+] add
[+] should properly add negative numbers
[+] should properly add positive numbers
[+] should verify input
3 passed, 0 failed, 0 skipped - TextReporter:87
Automation Testing
Unfortunately, there is no way to fully automate the testing of your game.
You can use TestService to create tests that automate the testing of some interactions, like a player touching a kill block or checking bullet paths from guns. But there isn't a publicly exposed way to start your game, record inputs, and validate the game state.
There's an internal service for this, and a non-scriptable service for mocking inputs, but without overriding CoreScripts it's really not possible at this moment in time.
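For the TestService part, here is a minimal sketch of a scripted check (the KillBlock part name is a made-up example):
-- Script in ServerScriptService; TestService:Check reports pass/fail lines in the output
local TestService = game:GetService("TestService")

local killBlock = workspace:FindFirstChild("KillBlock") -- hypothetical part name
TestService:Check(killBlock ~= nil, "KillBlock exists in the workspace")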

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
	// lots of setup stuff and defining success conditions
	result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
	// lots of setup stuff and defining success conditions
	result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test would then only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just blocks on a channel until an HTTP call arrives from the frontend.
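A hypothetical sketch of what that could look like (the Model receiver and the net/http usage are my assumptions, not the actual code):
// Block until a single HTTP request arrives on addr, then continue.
func (m *Model) WithHTTPFrontend(addr string) *Model {
	start := make(chan struct{}, 1)
	go func() {
		_ = http.ListenAndServe(addr, http.HandlerFunc(
			func(w http.ResponseWriter, r *http.Request) {
				select {
				case start <- struct{}{}: // first call releases the test
				default: // ignore subsequent calls
				}
			}))
	}()
	<-start // execution hangs here until the frontend signals
	return m
}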
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would run only from the command line - I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower-case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and this code (upper-case names), so that I could use them from a sub-package main.
Duplicate the whole code.
Move the test into a sub-package main, with a func main() for the test with the frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
	"flag"
	"log"
	"testing"
)

var debugTest bool

func init() {
	flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
	if debugTest {
		log.Println("Starting debug port for test...")
		// Start the server here
	}
	log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
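For example (a sketch; the variable name DEBUG_TEST is made up):
package hello_test

import (
	"os"
	"testing"
)

func TestHelloWorld(t *testing.T) {
	// opt in via the environment instead of a flag
	if os.Getenv("DEBUG_TEST") != "" {
		// start the debug server here
	}
}
Run it with: DEBUG_TEST=1 go test -run '^TestHelloWorld$' ./.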
I finally found an acceptable option. This answer:
Skip some tests with go test
put me on the right track.
Essentially: use build tags that are not present in normal builds but that I can supply when running manually.
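For completeness, a sketch of the build-tag approach (the tag name debugfrontend and the package name are made up):
//go:build debugfrontend

// This file is only compiled when the tag is supplied,
// so normal test runs never see it.
package mypkg

import "testing"

func TestFuncWithFrontend(t *testing.T) {
	// setup, then start the HTTP frontend and wait for the signal
}
Run it manually with: go test -tags debugfrontend -run '^TestFuncWithFrontend$' ./.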

Code coverage in SimpleTest

Is there any way to generate a code coverage report when using SimpleTest, similar to PHPUnit?
I have read the documentation of SimpleTest on their website but cannot find a clear explanation of how to do it!
I came across this website that says
we can add require_once(dirname(__FILE__).'/coverage.php')
to the intended file and it should generate the report, but it did not work!
If there is a helpful website on how to generate code coverage, please share it here.
Thanks a lot.
I could not get it to work in the officially supported way either, but here is something I got working, hacked together by examining their code. This works for v1.1.7 of SimpleTest, not their master code. At the time of this writing, v1.1.7 is the latest release, and it works with new versions of PHP 7 even though it is an old release.
First off, you have to make sure you have Xdebug installed, configured, and working. On my system there are both a CLI and an Apache version of the php.ini file that have to be configured properly, depending on whether I am trying to use PHP through Apache or directly from the terminal. There are alternatives to Xdebug, but most people use Xdebug.
Then, you have to make the PHP_CodeCoverage library accessible from your code. I recommend adding it to your project as a composer package.
Now you just have to manually use that library to capture code coverage and generate a report. How exactly you do that will depend on how you run your tests. Personally, I run my tests in the terminal, and I have a bootstrap file that PHP runs before it starts the script. At the end of the bootstrap file, I include the SimpleTest autorun file so it will automatically run the tests in any test classes that get included, like so:
require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
Somewhere inside your bootstrap file you will need to: create a filter; whitelist the directories and files you want reported; create a coverage object, passing the filter to the constructor; start coverage; and create and register a shutdown function that changes the way SimpleTest executes the tests, making sure it also stops the coverage and generates the coverage report. Your bootstrap file might look something like this:
<?php

require __DIR__.'/vendor/autoload.php';

$filter = new \SebastianBergmann\CodeCoverage\Filter();
$filter->addDirectoryToWhitelist(__DIR__."/src/");

$coverage = new \SebastianBergmann\CodeCoverage\CodeCoverage(null, $filter);
$coverage->start('<name of test>');

function shutdownWithCoverage($coverage)
{
    $autorun = function_exists('\run_local_tests'); // provided by simpletest
    if ($autorun) {
        $result = \run_local_tests(); // this actually runs the tests
    }

    $coverage->stop();

    $writer = new \SebastianBergmann\CodeCoverage\Report\Html\Facade;
    $writer->process($coverage, __DIR__.'/tmp/code-coverage-report');

    if ($autorun) {
        // prevent tests from running twice:
        exit($result ? 0 : 1);
    }
}

register_shutdown_function('\shutdownWithCoverage', $coverage);

require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
It took me some time to figure out, as - to put it mildly - the documentation for this feature is not really complete.
Once you have your test suite up and running, just include these lines before the ones that actually run it:
require_once ('simpletest/extensions/coverage/coverage.php');
require_once ('simpletest/extensions/coverage/coverage_reporter.php');
$coverage = new CodeCoverage();
$coverage->log = 'coverage/log.sqlite'; // This folder should exist
$coverage->includes = ['.*\.php$']; // Modify these as you wish
$coverage->excludes = ['simpletest.*']; // Or it is even better to use a setting file
$coverage->maxDirectoryDepth = '1';
$coverage->resetLog();
$coverage->startCoverage();
Then run your tests, for instance:
$test = new ProjectTests(); // an extension of the TestSuite class
$test->run(new HtmlReporter());
Finally, generate your reports:
$coverage->stopCoverage();
$coverage->writeUntouched();
$handler = new CoverageDataHandler($coverage->log);
$report = new CoverageReporter();
$report->reportDir = 'coverage/report'; // This folder should exist
$report->title = 'Code Coverage Report';
$report->coverage = $handler->read();
$report->untouched = $handler->readUntouchedFiles();
$report->summaryFile = $report->reportDir . '/index.html';
And that's it. Depending on your setup, you might need to make some small adjustments to make it work. For instance, if you are using the autorun.php from SimpleTest, it might be a bit more tricky.

How to run a specific test case in the selected environment in SoapUI

I have multiple environments and a lot of test cases, but not all test cases need to be run in every environment. Is there a way to run only specific test cases from a test suite, based on the selected environment?
For example:
If I select Environment1, it will run the following test cases
TC0001
TC0002
TC0003
TC0004
TC0005
If I select Environment2, it will run only the following test cases
TC0001
TC0003
TC0005
There can be different solutions to achieve this; since you have multiple environments, the Pro software is evidently being used.
I would achieve it using the test suite's Setup Script:
Create a test-suite-level custom property for each environment, using the environment name as the property name. For instance, if DEV is the environment defined, use DEV as the property name, and provide the list of test case names, separated by commas, as its value, say TC1, TC2 etc.
Define the other environments and their values similarly.
Copy the below script into the Setup Script of the test suite; on execution it enables or disables the test cases according to the environment and the property value.
Test Suite's Setup Script
/**
 * This is soapui's Setup Script
 * which enables / disables required
 * test cases based on the user list
 * for that specific environment
 **/
def disableTestCase(testCaze) {
    testCaze.disabled = true
}

def enableTestCase(testCaze) {
    testCaze.disabled = false
}

def getEnvironmentSpecificList(def testSuite) {
    def currentEnv = testSuite.project.activeEnvironment.NAME
    def enableList = testSuite.getPropertyValue(currentEnv).split(',').collect { it.trim() }
    log.info "List of tests to enable: ${enableList}"
    enableList
}

def userList = getEnvironmentSpecificList(testSuite)

testSuite.testCaseList.each { kase ->
    if (userList.contains(kase.name)) {
        enableTestCase(kase)
    } else {
        disableTestCase(kase)
    }
}
Another way to achieve this is using the Event feature of ReadyAPI: you may use TestRunListener.beforeRun() and filter whether each test case is executed or ignored.
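A rough sketch of such an event handler, reusing the same comma-separated suite property as above (a sketch, not verified against ReadyAPI):
// TestRunListener.beforeRun event handler
def testCase = testRunner.testCase
def suite = testCase.testSuite
def env = suite.project.activeEnvironment.NAME
def allowed = suite.getPropertyValue(env)?.split(',')*.trim()
if (!allowed || !allowed.contains(testCase.name)) {
    testRunner.cancel("Not applicable in environment: " + env)
}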
EDIT:
If you are using ReadyAPI, then you can use the new feature of tagging test cases. A test case can be tagged with multiple values, and you can execute tests by specific tags. In that case you may not need the Setup Script above, which is meant for the open-source edition. Refer to the documentation for more details.
Note that this tag feature is specific to the Pro software; the open-source edition does not have it.

Retrieve response from a "Run Test Case" step, using SoapUI/Groovy?

In SoapUI, I have a host test case which executes another, external test case (with several test steps) using the "Run Test Case" test step. I need to access a response from the external TC from within my host TC, since I need to assert on some values.
I cannot transfer the properties since they are in XML. Could I get some pointers on how I could leverage Groovy/SoapUI for this?
For the response, you can use the code below:
testRunner.testCase.getTestStepByName("test step").testRequest.response.responseContent
In your external TC, create another property, and at the end of the TC use a Property Transfer step to transfer your XML node to it. In your host TC, just read that property as you would any other.
I also had a look around to see if this can be done from Groovy. The SoapUI documentation says that you need to refer to the external test suite / test case by name:
def tc = testRunner.testCase.testSuite.project.testSuites["external TestSuite"].testCases["external TestCase"]
def ts = tc.testSteps["test step"]
But I could not find how to get at the Response after that.
In addition to Guest's and SiKing's answers, I share a solution to a problem that I've met:
If your step is not of type 'request' but 'calltestcase', you cannot use Guest's answer.
I have a lot of requests, each contained in a test case, and my other test cases call these test cases each time I need to launch a request.
I configured my request test cases to return the response as a custom property that I call "testResponse", so I can easily access it from my other test cases.
I met a problem in the following configuration:
I have a 'calltestcase' step that gives me a request result.
Further in the test, I have a Groovy script that needs to call this step and get the response value.
If I use this solution:
testRunner.runTestStepByName("test step")
followed by testRunner.testCase.getTestStepByName("test step").testRequest.response.responseContent
I'm stuck, as there is no testRequest property for that class.
The solution that works is:
testRunner.runTestStepByName("test step")
def response_value = context.expand( '${test step#testResponse#$[\'value\']}' )
Another solution is:
testRunner.runTestStepByName("test step")
tStep = testRunner.testCase.getTestStepByName("test step")
response = tStep.getPropertyValue("testResponse")
Then I extract the relevant value from 'response' (in my case, it is a json that I have to parse).
Of course it works only because I return the request response as a custom property of my request test case.
I hope I was clear enough.