Is there a way to test Roblox games?

As I started to understand a little more about Roblox, I wondered whether there is any way to automate testing. As a first step, only the Lua scripting, but ideally also simulating the game and interactions.
Is there any way of doing such a thing?
Also, if there are already best practices for testing on Roblox (including Lua scripting), I would like to know more about them.

Unit Testing
For Lua modules, I would recommend the library TestEZ. It was developed in-house by Roblox engineers to allow for behavior-driven tests. It lets you specify where your test files live and gives you fairly detailed output on how your tests did.
This example will run in Roblox Studio, but you can pair TestEZ with other libraries, like Lemur, for command-line and continuous-integration workflows. Anyway, follow these steps:
1. Get the TestEZ Library into Roblox Studio
Download Rojo. This program allows you to convert project directories into .rbxm (Roblox model object) files.
Download the TestEZ source code.
Open a PowerShell or terminal window and navigate into the downloaded TestEZ directory.
Build the TestEZ library with this command: rojo build --output TestEZ.rbxm
Make sure it generated a new file called TestEZ.rbxm in that directory.
Open Roblox Studio to your place.
Drag the newly created TestEZ.rbxm file into the world. It will unpack the library into a ModuleScript with the same name.
Move this ModuleScript somewhere like ReplicatedStorage.
2. Create unit tests
In this step, we need to create ModuleScripts with names ending in `.spec` and write tests for our source code.
A common way to structure code is to keep your classes in ModuleScripts with their tests right next to them. So let's say you have a simple utility class in a ModuleScript called MathUtil:
local MathUtil = {}

function MathUtil.add(a, b)
    assert(type(a) == "number")
    assert(type(b) == "number")
    return a + b
end

return MathUtil
To create tests for this file, create a ModuleScript next to it and call it MathUtil.spec. This naming convention is important, as it allows TestEZ to discover the tests.
return function()
    local MathUtil = require(script.Parent.MathUtil)

    describe("add", function()
        it("should verify input", function()
            expect(function()
                local result = MathUtil.add("1", 2)
            end).to.throw()
        end)

        it("should properly add positive numbers", function()
            local result = MathUtil.add(1, 2)
            expect(result).to.equal(3)
        end)

        it("should properly add negative numbers", function()
            local result = MathUtil.add(-1, -2)
            expect(result).to.equal(-3)
        end)
    end)
end
For a full breakdown on writing tests with TestEZ, please take a look at the official documentation.
3. Create a test runner
In this step, we need to tell TestEZ where to find our tests. Create a Script in ServerScriptService with this:
local TestEZ = require(game.ReplicatedStorage.TestEZ)

-- add any other root directory folders here that might have tests
local testLocations = {
    game.ServerStorage,
}

local reporter = TestEZ.TextReporter
--local reporter = TestEZ.TextReporterQuiet -- use this one if you only want to see failing tests

TestEZ.TestBootstrap:run(testLocations, reporter)
4. Run your tests
Now we can run the game and check the Output window. We should see our test output:
Test results:
[+] ServerStorage
  [+] MathUtil
    [+] add
      [+] should properly add negative numbers
      [+] should properly add positive numbers
      [+] should verify input

3 passed, 0 failed, 0 skipped - TextReporter:87
Automation Testing
Unfortunately, there is no way to fully automate the testing of your game.
You can use TestService to create tests that automate the testing of some interactions, like a player touching a kill block or checking bullet paths from guns. But there is no publicly exposed way to start your game, record inputs, and validate the game state.
There is an internal service for this, and a non-scriptable service for mocking inputs, but without overriding CoreScripts it's really not possible at this moment in time.

Related

How can I dynamically generate test cases with Common Test?

With Common Test test suites, it looks like test cases must be 1:1 with atoms that correspond to top-level functions in the suite. Is this true?
In that case, how can I dynamically generate test cases?
In particular, I want to read a directory and then, for each file in the directory (in parallel), do stuff with the file and compare against a snapshot.
I got the parallelization I wanted with rpc:pmap, but what I don't like is that the entire test case fails on the first bad assert. I want to see what happens with all the files, every time. Is there a way to do this?
Short answer: No.
Long answer: No. I even tried using Ghost Functions:
-module(my_test_SUITE).

-export [all/0].
-export [has_files/1].
-export ['$handle_undefined_function'/2].

all() -> [has_files | files()].

has_files(_) ->
    case files() of
        [] -> ct:fail("No files in ~s", [element(2, file:get_cwd())]);
        _ -> ok
    end.

files() ->
    [to_atom(AsString) || AsString <- filelib:wildcard("../../lib/exercism/test/*.test")].

to_atom(AsString) ->
    list_to_atom(filename:basename(filename:rootname(AsString))).

'$handle_undefined_function'(Func, [_]) ->
    Func = file:consult(Func).
And… as soon as I add the undefined function handler, rebar3 ct starts reporting…
All 0 tests passed.
Clearly Common Test itself relies on some functions being undefined in order to work. 🤷‍♂️
Data Directory
Each Common Test suite can have a "data" directory. This directory can contain anything you want. For example, a test suite mytest_SUITE can have a mytest_SUITE_data/ data directory. The path to the data directory can be obtained from the Config parameter in test cases.
someTest(Config) ->
    DataDir = ?config(data_dir, Config),
    %% TODO: do something with DataDir
    ?assert(false). %% include eunit header file for this to work
Running tests in parallel
To run tests in parallel you need to use groups. Add a groups/0 function to the test suite:
groups() -> ListOfGroups.
Each member of ListOfGroups is a tuple {Name, Props, Members}. Name is an atom, Props is a list of properties for the group, and Members is a list of the test cases in the group. Setting Props to [parallel|OtherProps] will enable the test cases in the group to be executed in parallel.
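For illustration, a minimal sketch of such a group (the group and test case names here are made up), declared with the parallel property and referenced from all/0 as {group, Name}:

groups() ->
    %% run check_file_a and check_file_b concurrently
    [{file_checks, [parallel], [check_file_a, check_file_b]}].

all() ->
    [{group, file_checks}].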
Dynamic Test Cases
Check out the cucumberl project.

Using mocks in a Karate DSL feature file with a standalone run

I have a REST service, written in a language other than Java.
It has a few dependencies on other REST services.
For example, the service under development and test is A; the other services are B and C.
I want to run system tests for A; some tests require B and/or C to be online and to answer queries from A.
I wrote b-mock.feature and c-mock.feature to represent those services as mocks.
I also wrote some a-test-smth.feature files to run tests against A.
Is it possible to add some information to a-test-smth.feature to enable certain mocks for a specific test?
Right now I have to run the standalone karate.jar twice: first for the mocks, second to run the tests. That approach works, but I can't check that:
some API calls to A do not require B or C
I can't emulate service B being down, or, for example, returning a slow or incorrect response
Thanks.
Are you using Java? If so, the best approach is to perform the set-up of your test in Java code. You can start two mocks for B and C and then start the main test for your service A. At the end, do clean-up if needed.
You can refer to this as an example: https://github.com/intuit/karate/tree/master/karate-netty#consumer-provider-example
Row 3 shows how you can start a mock and run a Karate test.
If you are not using Java and would like to use only the stand-alone JAR, it is actually possible using Java interop, and quite easy; I just tried it.
EDIT: This API is now built into Karate, so you don't need to write the extra JS code below: https://github.com/intuit/karate/tree/master/karate-netty#within-a-karate-test
(Obsolete)
First create this bit of JavaScript code that is smart enough to start a Karate mock:
function() {
    var Mock = Java.type('com.intuit.karate.netty.FeatureServer');
    var file = new java.io.File('src/test/java/mock/web/cats-mock.feature');
    var server = Mock.start(file, 0, false, null);
    return server.port;
}
And this is how it can look in the Background of your main Karate test. You can see how you can add some conditional logic if needed, and you have plenty of ways to change things based on your environment.
Background:
    * def starter = read('start-mock.js')
    * def port = karate.env == 'mock' ? starter() : 8080
    * url 'http://localhost:' + port + '/cats'
Does this answer your question? Let me know and I will add this trick to the documentation!

Running a main-like function in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start if I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether with go run or go test or whatever.
I've also thought of ExampleTestFunc() but there's so much output produced by the test it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and this code with upper-case names, so that I could use them from a sub-main package.
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... And the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively, it's also possible to set a custom environment variable that you check in the test function to change behaviour.
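A minimal sketch of that variant, assuming an illustrative variable name DEBUG_TEST (the name is not from the original answer):

package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // DEBUG_TEST is a made-up name; any non-empty value enables the debug frontend.
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}

Run it with something like DEBUG_TEST=1 go test -run '^TestHelloWorld$' ./.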
I finally found an acceptable option. This answer,
Skip some tests with go test,
put me on the right track.
Essentially: use build tags that are not present in normal builds but which I can supply when running the test manually.
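As a hedged sketch of that approach (the tag name frontend, the file name, and the package name are illustrative): put the frontend-driven test in its own file behind a build constraint, so a plain go test never even compiles it.

//go:build frontend

// frontend_test.go: compiled only when the "frontend" build tag is supplied
// (Go 1.17+ //go:build syntax; older toolchains also need a // +build frontend line).
package hello_test

import "testing"

func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff, then e.g.:
    // result := SystemModel.Run().WithHTTPFrontend(":9999")
}

Then run it manually with go test -tags frontend -run '^TestFuncWithFrontend$' ./, while the normal CI run, which omits the tag, never sees it.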

Code coverage in SimpleTest

Is there any way to generate a code coverage report when using SimpleTest, similar to PHPUnit?
I have read the documentation of SimpleTest on their website but cannot find a clear explanation of how to do it!
I came across this website that says
we can add require_once (dirname(__FILE__).'/coverage.php')
to the intended file and it should generate the report, but it did not work!
If there is a helpful website on how to generate code coverage, please share it here.
Thanks a lot.
I could not get it to work in the officially supported way either, but here is something I got working that I was able to hack together by examining their code. This works for v1.1.7 of SimpleTest, not their master code. At the time of this writing, v1.1.7 is the latest release, and it works with new versions of PHP 7, even though it is an old release.
First off, you have to make sure you have Xdebug installed, configured, and working. On my system there are both a CLI and an Apache version of the php.ini file that have to be configured properly, depending on whether I am using PHP through Apache or directly from the terminal. There are alternatives to Xdebug, but most people use Xdebug.
Then, you have to make the PHP_CodeCoverage library accessible from your code. I recommend adding it to your project as a Composer package.
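(The classes used below ship in the phpunit/php-code-coverage Composer package; assuming that package name, something like composer require --dev phpunit/php-code-coverage should pull in a version compatible with your PHP.)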
Now you just have to manually use that library to capture code coverage and generate a report. How exactly you do that will depend on how you run your tests. Personally, I run my tests on the terminal, and I have a bootstrap file that PHP runs before it starts the script. At the end of the bootstrap file, I include the SimpleTest autorun file so it will automatically run the tests in any test classes that get included, like so:
require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
Somewhere inside your bootstrap file you will need to: create a filter; whitelist the directories and files you want reported; create a coverage object, passing the filter to its constructor; start coverage; and create and register a shutdown function that changes the way SimpleTest executes the tests, making sure it also stops coverage and generates the coverage report. Your bootstrap file might look something like this:
<?php
require __DIR__.'/vendor/autoload.php';

$filter = new \SebastianBergmann\CodeCoverage\Filter();
$filter->addDirectoryToWhitelist(__DIR__."/src/");

$coverage = new \SebastianBergmann\CodeCoverage\CodeCoverage(null, $filter);
$coverage->start('<name of test>');

function shutdownWithCoverage($coverage)
{
    $autorun = function_exists('\run_local_tests'); // provided by simpletest
    if ($autorun) {
        $result = \run_local_tests(); // this actually runs the tests
    }

    $coverage->stop();
    $writer = new \SebastianBergmann\CodeCoverage\Report\Html\Facade;
    $writer->process($coverage, __DIR__.'/tmp/code-coverage-report');

    if ($autorun) {
        // prevent tests from running twice:
        exit($result ? 0 : 1);
    }
}

register_shutdown_function('\shutdownWithCoverage', $coverage);

require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
It took me some time to figure out, as the documentation for this feature is, to put it mildly, not really complete.
Once you have your test suite up and running, just include these lines before the ones that actually run it:
require_once ('simpletest/extensions/coverage/coverage.php');
require_once ('simpletest/extensions/coverage/coverage_reporter.php');
$coverage = new CodeCoverage();
$coverage->log = 'coverage/log.sqlite'; // This folder should exist
$coverage->includes = ['.*\.php$']; // Modify these as you wish
$coverage->excludes = ['simpletest.*']; // Or it is even better to use a setting file
$coverage->maxDirectoryDepth = '1';
$coverage->resetLog();
$coverage->startCoverage();
Then run your tests, for instance:
$test = new ProjectTests(); //It is an extension of the class TestSuite
$test->run(new HtmlReporter());
Finally, generate your reports:
$coverage->stopCoverage();
$coverage->writeUntouched();
$handler = new CoverageDataHandler($coverage->log);
$report = new CoverageReporter();
$report->reportDir = 'coverage/report'; // This folder should exist
$report->title = 'Code Coverage Report';
$report->coverage = $handler->read();
$report->untouched = $handler->readUntouchedFiles();
$report->summaryFile = $report->reportDir . '/index.html';
And that's it. Depending on your setup, you might need to make some small adjustments to make it work. For instance, if you are using autorun.php from SimpleTest, it might be a bit more tricky.

Integrating RFT Test framework to work with RQM

I designed a framework in RFT where the test cases are written in a spreadsheet specifying the data source, object, and keyword, plus a driver script that processes all this data and routes each test step to the appropriate method. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could now implement an algorithm that reads those test cases from the spreadsheet and passes them to RQM as attachments with logTestResult.
For example:
logTestResult( <your attachment> , true );
And if you are already connected to RQM, the adapter will automatically attach the files you indicate to RQM. So, at the end, you will see the results step by step, and if the script ends correctly RQM will show the script as "passed".
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script, as shown below:
public void testMain(Object[] args) throws Exception
{
    String n = args[0].toString();
    logInfo("Parameter from RQM" + n);
    ModuleDriver d = new ModuleDriver();
    d.execute_main(n);
}
Since I have verification points set up for each of the steps in my test cases, the results get reported based on each of those verification points in RQM, which is what I needed.