Running a main-like in a non-main package - testing

We have a package with a fair number of complex tests. As part of the test suite, they run during builds, etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test would then only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
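A minimal sketch of that gating mechanism (the package name, helper name, and handler path are all hypothetical):

package systemmodel

import (
    "net/http"
    "sync"
)

// waitForStartSignal blocks until the frontend sends an HTTP request to
// /start on the given address, e.g. curl http://localhost:9999/start
func waitForStartSignal(addr string) {
    start := make(chan struct{})
    var once sync.Once
    mux := http.NewServeMux()
    mux.HandleFunc("/start", func(w http.ResponseWriter, r *http.Request) {
        once.Do(func() { close(start) }) // signal exactly once
    })
    srv := &http.Server{Addr: addr, Handler: mux}
    go srv.ListenAndServe() // error handling omitted in this sketch
    <-start                 // block here until the frontend signals
    srv.Close()
}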
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether with go run or go test or whatever.
I've also thought of ExampleTestFunc(), but the test produces so much output that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower-case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
1. Export all these initialization variables and code with upper case, so that I could use them from a sub-main package.
2. Duplicate the whole code.
3. Move the test into a sub-package main and then have a func main() for the test with the frontend, plus a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?

You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
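For example (a minimal sketch; the DEBUG_TEST variable name is illustrative):

// inside TestHelloWorld, instead of the flag check (requires importing "os"):
if os.Getenv("DEBUG_TEST") != "" {
    log.Println("Starting debug port for test...")
    // Start the server here
}

and then run it with DEBUG_TEST=1 go test -run '^TestHelloWorld$' .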

I finally found an acceptable option. The answer to "Skip some tests with go test" put me on the right track.
Essentially, it uses build tags which are not present in normal builds but which I can provide when executing manually.
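For example, a sketch using a hypothetical debugtest tag (the file and package names are illustrative): put the frontend variant in its own file, guarded by a build constraint, so it is only compiled when the tag is supplied:

// frontend_debug_test.go

//go:build debugtest

package systemmodel // use your package's real name here

import "testing"

func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result // the real test would assert on result
}

Run it manually with go test -tags debugtest -run '^TestFuncWithFrontend$' . Without the tag, the file isn't compiled at all, so the automated suite never even sees the test. (On Go versions before 1.17, the older // +build debugtest form applies.)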

Related

Is there a way to test roblox games?

As I started to understand a little bit more about Roblox, I was wondering if there is any possible way to automate testing. As a first step, only for the Lua scripting, but ideally also simulating the game and interactions.
Is there any way of doing such a thing?
Also, if there are already best practices for testing on Roblox (including Lua scripting), I would like to know more about them.
Unit Testing
For Lua modules, I would recommend the library TestEZ. It was developed in-house by Roblox engineers to allow for behavior-driven tests. It allows you to specify a location where test files exist and gives you pretty detailed output as to how your tests did.
This example will run in Roblox Studio, but you can pair it with other libraries like Lemur for command-line and continuous-integration workflows. Anyway, follow these steps:
1. Get the TestEZ Library into Roblox Studio
Download Rojo. This program allows you to convert project directories into .rbxm (Roblox model object) files.
Download the TestEZ source code.
Open a Powershell or Terminal window and navigate into the downloaded TestEZ directory.
Build the TestEZ library with this command: `rojo build --output TestEZ.rbxm`
Make sure that it generated a new file called TestEZ.rbxm in that directory.
Open Roblox Studio to your place.
Drag the newly created TestEZ.rbxm file into the world. It will unpack the library into a ModuleScript with the same name.
Move this ModuleScript somewhere like ReplicatedStorage.
2. Create unit tests
In this step we need to create ModuleScripts with names ending in `.spec` and write tests for our source code.
A common way to structure code is to keep your classes in ModuleScripts with their tests right next to them. So let's say you have a simple utility class in a ModuleScript called MathUtil:
local MathUtil = {}

function MathUtil.add(a, b)
    assert(type(a) == "number")
    assert(type(b) == "number")
    return a + b
end

return MathUtil
To create tests for this file, create a ModuleScript next to it and call it MathUtil.spec. This naming convention is important, as it allows TestEZ to discover the tests.
return function()
    local MathUtil = require(script.Parent.MathUtil)

    describe("add", function()
        it("should verify input", function()
            expect(function()
                local result = MathUtil.add("1", 2)
            end).to.throw()
        end)

        it("should properly add positive numbers", function()
            local result = MathUtil.add(1, 2)
            expect(result).to.equal(3)
        end)

        it("should properly add negative numbers", function()
            local result = MathUtil.add(-1, -2)
            expect(result).to.equal(-3)
        end)
    end)
end
For a full breakdown on writing tests with TestEZ, please take a look at the official documentation.
3. Create a test runner
In this step, we need to tell TestEZ where to find our tests. So create a Script in ServerScriptService with this:
local TestEZ = require(game.ReplicatedStorage.TestEZ)

-- add any other root directory folders here that might have tests
local testLocations = {
    game.ServerStorage,
}

local reporter = TestEZ.TextReporter
-- local reporter = TestEZ.TextReporterQuiet -- use this one if you only want to see failing tests

TestEZ.TestBootstrap:run(testLocations, reporter)
4. Run your tests
Now we can run the game and check the Output window. We should see our test output:
Test results:
[+] ServerStorage
[+] MathUtil
[+] add
[+] should properly add negative numbers
[+] should properly add positive numbers
[+] should verify input
3 passed, 0 failed, 0 skipped - TextReporter:87
Automation Testing
Unfortunately, there is no way to fully automate the testing of your game.
You can use TestService to create tests that automate the testing of some interactions, like a player touching a kill block or checking bullet paths from guns. But there isn't a publicly exposed way to start your game, record inputs, and validate the game state.
There's an internal service for this, and a non-scriptable service for mocking inputs, but without overriding CoreScripts, it's really not possible at this moment in time.
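To illustrate the TestService approach mentioned above, here is a minimal sketch (the scenario, instance names, and messages are purely illustrative):

-- Script in ServerScriptService
local TestService = game:GetService("TestService")

-- hypothetical check: a kill block must exist before we can test touches
local killBlock = workspace:FindFirstChild("KillBlock")
TestService:Check(killBlock ~= nil, "KillBlock should exist in the Workspace")

if killBlock then
    TestService:Checkpoint("KillBlock found, simulating a touch")
    -- ...simulate the touch and assert on the resulting game state here...
end

TestService:Done()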

Testing with `use-ok` on modules with a `MAIN` definition

I am writing a new Perl 6 project for work, and would like to be able to test whether all parts can be used correctly. For this, I'm using the use-ok subroutine from the Test module. I'm trying to easily test all module files using the following code:
"META6.json".IO.slurp.&from-json<provides>
.grep(*.value.starts-with("lib")).Hash.keys
.map({ use-ok $_ })
My issue here is that there are a few files that contain a definition for a MAIN subroutine. From the output I see when running prove -e 'perl6 -Ilib' t, it looks like one of the files has its MAIN executed, and then the testing stops.
I want to test whether these files can be used correctly, without actually running the MAIN subs that are defined within them. How would I do this?
The MAIN of a file is only executed if it is in the top level of the mainline of a program. So:
sub MAIN() is export { } # this will be executed when the mainline executes
However, if you move the MAIN sub out of the top level, it will not get executed. But you can still export it.
{
    sub MAIN() is export { } # will *not* execute
}
Sorry for taking so long to answer: it took a while for me to figure out what the question was :-)

Jenkins' EnvInject Plugin does not persist values

I have a build that uses the EnvInject Plugin to set an environment variable.
A different job needs to scan the last good Jenkins build of that job and get the value of that environment variable.
This all works well, except sometimes the variable will disappear from build history. It seems that after some time passes, when I look at the 'Environment variables' section in build history, the injected value simply disappears.
How can I make this persist? Is this a bug, or part of the design?
If it makes any difference, the value of the injected variable is over 1500 characters long and in the following format: 'component1=1.1.2;component2=1.1.3,component3=4.1.2,component4=1.1.1,component4=1.3.2,component4=1.1.4'
Looks like EnvInject and/or JobDSL have a bug.
Steps to reproduce:
Set up a job that runs this JobDSL:
job('run_deploy_mock') {
    steps {
        environmentVariables {
            env('deployedArtifacts', 'component1=1.0.0.2')
        }
    }
}
Run it and it will create a job called 'run_deploy_mock'.
Run the 'run_deploy_mock' job. After build #1 is done, go to the build details and check the 'Environment Variables' section for an entry called 'component1'.
Run the JobDSL job again.
Check the 'Environment Variables' section for 'run_deploy_mock' build #1. The 'component1' variable is now missing.
If I substitute the '=' for something else, it works as expected.
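For example, a sketch of that workaround, using ':' as the separator instead (the consuming job then has to split on ':' accordingly):

job('run_deploy_mock') {
    steps {
        environmentVariables {
            // avoid '=' inside the value; use ':' as the key/value separator
            env('deployedArtifacts', 'component1:1.0.0.2')
        }
    }
}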
Created a Jenkins Jira issue for this.

Protractor - How to separate each test into one file and separate variables

I have some complex Protractor tests written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and later in the code I use them.
What I need to do is:
1. Separate all the variables into an additional file (some config file)
2. Move each test into its own file
1 - I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it doesn't work (the test fails with "userName is not defined").
I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what happened in the previous script if they're separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what happened in the previous script if they're separate? For example, how do I know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll(), afterAll() available in Jasmine 2, or via the third-party jasmine-beforeAll for Jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
1) I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it doesn't work (the test fails with "userName is not defined"). I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of global test-preparation step, you can do it in the onPrepare() function; see Setting Up the System Under Test. An example configuration that performs a "global" login step is available here.
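A minimal sketch of such a login step (the URL and element locators are hypothetical and need to be adapted to your application):

onPrepare: function () {
    browser.get("/login");
    element(by.model("user.login")).sendKeys(browser.params.user.login);
    element(by.model("user.password")).sendKeys(browser.params.user.password);
    element(by.css("[type=submit]")).click();
}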
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor) by setting them with global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the `EC` variable will be available in scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Organizing your tests using the Page Object pattern would also help to solve the reusability and modularity problem.
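For instance, a minimal page-object sketch (the file name and locators are hypothetical):

// loginPage.js
var LoginPage = function () {
    this.loginField = element(by.model("user.login"));
    this.passwordField = element(by.model("user.password"));
    this.submitButton = element(by.css("[type=submit]"));

    this.login = function (user) {
        browser.get("/login");
        this.loginField.sendKeys(user.login);
        this.passwordField.sendKeys(user.password);
        this.submitButton.click();
    };
};

module.exports = new LoginPage();

Each spec then requires only the page object and stays free of raw locators:

var loginPage = require("./loginPage.js");
loginPage.login(browser.params.user);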

ScalaTest and IntelliJ - Running one Test at a time

I have a ScalaTest suite which extends FlatSpec. There are many tests inside it, and I now want the ability to run one test at a time. No matter what I do, I can't get IntelliJ to do it. In the test's Edit Configurations, I can specify that it should run one test at a time by giving the name of the test. For example:
it should "test the sample multiple times" in new MyDataHelper {
...
}
where I gave the name as "test the sample multiple times", but it does not seem to take it, and all I get to see is that it just prints Empty Test Suite. Any ideas how this can be done?
If using Gradle, go to Preferences > Build, Execution, Deployment > Build Tools > Gradle and in the Build and run > Run tests using: section, select IntelliJ IDEA if you haven't already.
An approach that works for me is to right-click (on Windows) within the definition of the test, and choose "Run MyTestClass..." -- or, equivalently, Ctrl-Shift-F10 with the cursor already inside the test. But it's a little delicate and your specific example may be causing your problem. Consider:
class MyTestClass extends FlatSpec with Matchers {

  "bob" should "do something" in {
    // ...
  }

  it should "do something else" in {
    // ...
  }

  "fred" should "do something" in {
    // ...
  }

  it should "do something else" in {
    // ...
  }
}
I can use the above approach to run any of the four tests individually. Your approach based on editing configurations works too. But if I delete the first test, I can't run the second one individually -- the others are still fine. That's because a test that starts with it is intended to follow one that doesn't -- the it is then replaced with the appropriate string in the name of the test.
If you want to run the tests by setting up configurations, then the names of these four tests are:
bob should do something
bob should do something else
fred should do something
fred should do something else
Again, note the substitution for it -- there's no way to figure out the name of a test starting with it if it doesn't follow another test.
I'm using IntelliJ Idea 13.1.4 on Windows with Scala 2.10.4, Scala plugin 0.41.1, and ScalaTest 2.1.0. I wouldn't be surprised if this worked less well in earlier versions of Idea or the plugin.
I just realized that I'm able to run individual tests with IntelliJ 13.1.3 Community Edition. With the one I had earlier, 13.0.x, it was unfortunately not possible.