With Common Test test suites, it looks like test cases must be 1:1 with atoms that correspond to top-level functions in the suite. Is this true?
In that case, how can I dynamically generate test cases?
In particular, I want to read a directory, and then, (in parallel) for each file in the directory, do stuff with the file and then compare against a snapshot.
I got the parallelization I wanted with rpc:pmap, but what I don't like is that the entire test case fails on the first failed assertion. I want to see what happens with all the files, every time. Is there a way to do this?
Short answer: No.
Long answer: No. I even tried using Ghost Functions:
-module(my_test_SUITE).

-export([all/0]).
-export([has_files/1]).
-export(['$handle_undefined_function'/2]).

all() -> [has_files | files()].

has_files(_) ->
    case files() of
        [] -> ct:fail("No files in ~s", [element(2, file:get_cwd())]);
        _ -> ok
    end.

files() ->
    [to_atom(AsString) || AsString <- filelib:wildcard("../../lib/exercism/test/*.test")].

to_atom(AsString) ->
    list_to_atom(filename:basename(filename:rootname(AsString))).

%% Consult the file behind each dynamically generated case
'$handle_undefined_function'(Func, [_Config]) ->
    {ok, _} = file:consult(Func).
And… as soon as I add the undefined function handler, rebar3 ct starts reporting…
All 0 tests passed.
Clearly, Common Test itself relies on some functions being undefined in order to work. 🤷‍♂️
Data Directory
Each Common Test suite can have a "data" directory. This directory can contain anything you want. For example, a test suite mytest_SUITE can have a mytest_SUITE_data/ data directory. The path to the data directory can be obtained from the Config parameter in test cases.
%% -include_lib("common_test/include/ct.hrl"). %% provides ?config
%% -include_lib("eunit/include/eunit.hrl").    %% provides ?assert
someTest(Config) ->
    DataDir = ?config(data_dir, Config),
    %% TODO: do something with DataDir
    ?assert(false).
Running tests in parallel
To run tests in parallel you need to use groups. Add a groups/0 function to the test suite:
groups() -> ListOfGroups.
Each member of ListOfGroups is a tuple {Name, Props, Members}. Name is an atom, Props is a list of group properties, and Members is a list of the test cases in the group. Setting Props to [parallel|OtherProps] makes the test cases in the group run in parallel.
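For example, a minimal sketch (the group and case names here are placeholders; a suite that generates cases dynamically would splice files() into the members list instead):
all() -> [{group, file_tests}].

groups() ->
    [{file_tests, [parallel], [file_a, file_b]}].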
Dynamic Test Cases
Check out the cucumberl project.
Related
As I started to understand a little bit more about Roblox, I was wondering if there is any possible way to automate the testing: as a first step, only the Lua scripting, but ideally also simulating the game and interactions.
Is there any way of doing such a thing?
Also, if there are already best practices for testing on Roblox (this includes Lua scripting), I would like to know more about them.
Unit Testing
For Lua modules, I would recommend the TestEZ library. It was developed in-house by Roblox engineers to allow for behavior-driven tests. It allows you to specify locations where test files exist and gives you pretty detailed output as to how your tests did.
This example will run in Roblox Studio, but you can pair it with other libraries, like Lemur, for command-line and continuous-integration workflows. Follow these steps:
1. Get the TestEZ Library into Roblox Studio
Download Rojo. This program allows you to convert project directories into .rbxm (Roblox model object) files.
Download the TestEZ source code.
Open a Powershell or Terminal window and navigate into the downloaded TestEZ directory.
Build the TestEZ library with this command: rojo build --output TestEZ.rbxm
Make sure that it generated a new file called TestEZ.rbxm in that directory.
Open Roblox Studio to your place.
Drag the newly created TestEZ.rbxm file into the world. It will unpack the library into a ModuleScript with the same name.
Move this ModuleScript somewhere like ReplicatedStorage.
2. Create unit tests
In this step we need to create ModuleScripts with names ending in `.spec` and write tests for our source code.
A common way to structure code is to keep your classes in ModuleScripts, with their tests right next to them. So let's say you have a simple utility class in a ModuleScript called MathUtil:
local MathUtil = {}

function MathUtil.add(a, b)
    assert(type(a) == "number")
    assert(type(b) == "number")
    return a + b
end

return MathUtil
To create tests for this file, create a ModuleScript next to it and call it MathUtil.spec. This naming convention is important, as it allows TestEZ to discover the tests.
return function()
    local MathUtil = require(script.Parent.MathUtil)

    describe("add", function()
        it("should verify input", function()
            expect(function()
                MathUtil.add("1", 2)
            end).to.throw()
        end)

        it("should properly add positive numbers", function()
            local result = MathUtil.add(1, 2)
            expect(result).to.equal(3)
        end)

        it("should properly add negative numbers", function()
            local result = MathUtil.add(-1, -2)
            expect(result).to.equal(-3)
        end)
    end)
end
For a full breakdown on writing tests with TestEZ, please take a look at the official documentation.
3. Create a test runner
In this step, we need to tell TestEZ where to find our tests, so create a Script in ServerScriptService with this:
local TestEZ = require(game.ReplicatedStorage.TestEZ)

-- add any other root directory folders here that might have tests
local testLocations = {
    game.ServerStorage,
}

local reporter = TestEZ.TextReporter
-- local reporter = TestEZ.TextReporterQuiet -- use this one if you only want to see failing tests

TestEZ.TestBootstrap:run(testLocations, reporter)
4. Run your tests
Now we can run the game and check the Output window. We should see our test output:
Test results:
[+] ServerStorage
  [+] MathUtil
    [+] add
      [+] should properly add negative numbers
      [+] should properly add positive numbers
      [+] should verify input
3 passed, 0 failed, 0 skipped - TextReporter:87
Automation Testing
Unfortunately, there does not exist a way to fully automate the testing of your game.
You can use TestService to create tests that automate the testing of some interactions, like a player touching a kill block or checking bullet paths from guns. But there isn't a publicly exposed way to start your game, record inputs, and validate the game state.
There's an internal service for this, and a non-scriptable service for mocking inputs, but without overriding CoreScripts it's really not possible at this moment in time.
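To illustrate the TestService route mentioned above, here is a small sketch (the checks themselves are hypothetical; TestService:Check simply records a pass/fail entry in the output):
local TestService = game:GetService("TestService")
local Players = game:GetService("Players")

-- hypothetical smoke checks for a running session
local player = Players:GetPlayers()[1]
TestService:Check(player ~= nil, "a player is present")
TestService:Check(player ~= nil and player.Character ~= nil, "the player has spawned a character")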
I'm currently writing tests for my application, and therefore I have to test some click.group commands I defined.
Let's say I defined them like this:
@click.group(cls=MyGroup)
@click.pass_context
def myapp(ctx):
    init_stuff()

@myapp.command()
@click.option('--myOption')
def foo(myoption: str) -> None:  # click lowercases option names
    do_stuff()  # change some files, print, create other files
I know that I could use the CliRunner from click.testing. However, I just want to make sure that the command is called, but I DON'T WANT it to execute any code (for example by applying CliRunner.invoke()).
How could this be done?
I couldn't come up with a solution using mocking with foo, for example. Or do I have to execute code, let's say using the isolated_filesystem() which CliRunner provides?
So the question is: What would be the most efficient way to test my commands when defined like shown above?
Many thanks in advance
You could add a --dry-run flag to your group or some commands and save it inside the context; if the flag is enabled, do not execute any code. Then you can use CliRunner.invoke() with the --dry-run flag enabled and just check that your invocations have happened, without actually executing the code.
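A minimal sketch of that idea (cls=MyGroup is omitted for brevity, do_stuff is the question's helper, and note that click lowercases --myOption to the parameter name myoption):

import click
from click.testing import CliRunner

@click.group()
@click.option('--dry-run', is_flag=True, default=False)
@click.pass_context
def myapp(ctx, dry_run):
    ctx.ensure_object(dict)
    ctx.obj['dry_run'] = dry_run  # stash the flag for subcommands

@myapp.command()
@click.option('--myOption')
@click.pass_context
def foo(ctx, myoption):
    if ctx.obj['dry_run']:
        click.echo('foo: dry run, skipping side effects')
        return
    do_stuff()  # would change some files, print, create other files

def test_foo_dry_run():
    runner = CliRunner()
    result = runner.invoke(myapp, ['--dry-run', 'foo', '--myOption', 'x'])
    assert result.exit_code == 0
    assert 'dry run' in result.output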
We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
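A rough sketch of that waiting mechanism (waitForFrontend is a hypothetical helper; the real WithHTTPFrontend would wrap something like this):

package frontend

import (
    "net/http"
    "sync"
)

// waitForFrontend blocks until the debug frontend makes any HTTP request.
func waitForFrontend(addr string) {
    start := make(chan struct{})
    var once sync.Once
    go func() {
        _ = http.ListenAndServe(addr, http.HandlerFunc(
            func(w http.ResponseWriter, _ *http.Request) {
                once.Do(func() { close(start) }) // first request releases the test
                w.WriteHeader(http.StatusNoContent)
            }))
    }()
    <-start
}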
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the commandline - I don't care if with go run or go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and code with upper case, so that I could use them from a sub-main package.
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then, if you want to run just that specific test: go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively, it's also possible to set a custom environment variable that you check in the test function to change behaviour.
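For instance, a sketch of the environment-variable variant (the DEBUG_TEST name is arbitrary):

package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorldEnv(t *testing.T) {
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}

Run it with DEBUG_TEST=1 go test -run '^TestHelloWorldEnv$' .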
I finally found an acceptable option. This answer, Skip some tests with go test, put me on the right track.
Essentially, it uses build tags that are not present in normal builds but that I can provide when executing manually.
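A minimal sketch of that approach (the debugfrontend tag name is mine; the body mirrors the test from the question):

//go:build debugfrontend

package hello_test

import "testing"

// Excluded from normal builds by the tag above; run on demand with:
//   go test -tags debugfrontend -run '^TestFuncWithFrontend$' .
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    // result := SystemModel.Run().WithHTTPFrontend(":9999")
}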
Different parameters in TFS test with shared steps
I have a situation where I write manual test cases in TFS (Server Version 15.117.27024.0).
I created a shared step with 2 parameter values.
How do I call the same shared step, with different parameter values, from within the same testcase?
in pseudo-code:
test_case (
    shared_steps('param1', 'param2');
    shared_steps('param3', 'param4');
    step3();
    step4();
)
From the web interface and various (old) blog posts, it seems like this is not possible. If that is indeed the case, I would like to have that verified.
No, it's not possible.
We can only open the shared steps (double-click the shared step) to see the parameter values within the same test case.
I'm making a library with a module that, when use'd, injects some functions dependent on the contents of a directory, and I want to test the behaviour with different directories. Currently I get the path to the directory from application config with Application.get_env/3.
If I'm changing the directory with Application.put_env/4, does that mean my tests have to run sequentially, since this is effectively a global value?
Can I stub out the call to Application.get_env/3? Or should I be passing in the value in another way? (such as via the use macro)
The simplest way is to pass the value in as an argument. Your module could fall back to Application.get_env only when no value is passed in. Something like:
Application.put_env(MyApplication, :some_key, "hello")

defmodule Test do
  def test(string \\ Application.get_env(MyApplication, :some_key)) do
    IO.inspect(string)
  end
end
# Default behaviour
Test.test # => "hello"
# In your tests
Test.test("world") # => "world"