I am writing a new Perl 6 project for work, and would like to be able to test whether all parts can be used correctly. For this, I'm using the use-ok subroutine from the Test module. I'm trying to easily test all module files using the following code:
"META6.json".IO.slurp.&from-json<provides>
.grep(*.value.starts-with("lib")).Hash.keys
.map({ use-ok $_ })
My issue here is that a few of these files contain a definition for a MAIN subroutine. From the output I see when running prove -e 'perl6 -Ilib' t, it looks like one of the files has its MAIN executed, and then the testing stops.
I want to test whether these files can be used correctly, without actually running the MAIN subs that are defined within them. How would I do this?
The MAIN of a file is only executed if it is in the top level of the mainline of a program. So:
sub MAIN() is export { } # this will be executed when the mainline executes
However, if you move the MAIN sub out of the top level, it will not get executed. But you can still export it:
{
    sub MAIN() is export { } # will *not* execute
}
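To illustrate, here is a minimal sketch of what a module file could look like under this approach (the module name and the helper sub are made up; the pattern is just the block trick above):

# lib/My/App.pm6 -- hypothetical module file
unit module My::App;

sub helper() is export { say "doing the work" }

# MAIN sits inside a block, so merely loading the file
# (as use-ok does) will not run it -- yet it is still exported:
{
    sub MAIN() is export { helper }
}

The intent is that a script that uses the module imports MAIN, while use-ok in the test merely loads the file.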
Sorry for taking so long to answer: it took a while for me to figure out what the question was :-)
I'm currently writing tests for my application, and therefore I have to test some click group commands I defined.
Let's say I defined them like this:
@click.group(cls=MyGroup)
@click.pass_context
def myapp(ctx):
    init_stuff()

@myapp.command()
@click.option('--myOption')
def foo(myOption: str) -> None:
    do_stuff()  # change some files, print, create other files
I know that I could use the CliRunner from click.testing. However, I just want to make sure that the command is called, but I DON'T want it to execute any code (for example via CliRunner.invoke()).
How could this be done?
I couldn't come up with a solution using mocking of foo, for example. Or do I have to execute code, let's say using the isolated_filesystem() which CliRunner provides?
So the question is: What would be the most efficient way to test my commands when defined like shown above?
Many thanks in advance
You could add a --dry-run flag to your group or some commands, and save it inside the context; if the flag is enabled, do not execute any code. Then you can use CliRunner.invoke() with the --dry-run flag enabled and just check that your invocations have happened, without actually executing the code.
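A minimal sketch of that idea, reusing the names from the question (myapp, foo, do_stuff); the exact dry-run wiring and the test body are my own illustration, not the only way to do it:

import click
from click.testing import CliRunner

def do_stuff():
    pass  # placeholder for the real work from the question

@click.group()
@click.option('--dry-run', is_flag=True, default=False,
              help='Dispatch commands without executing any real work.')
@click.pass_context
def myapp(ctx, dry_run):
    ctx.ensure_object(dict)
    ctx.obj['dry_run'] = dry_run  # save the flag inside the context

@myapp.command()
@click.option('--myOption')
@click.pass_context
def foo(ctx, myoption):  # click lowercases '--myOption' to 'myoption'
    if ctx.obj['dry_run']:
        click.echo('foo invoked (dry run)')
        return  # skip the real work entirely
    do_stuff()

def test_foo_dry_run():
    runner = CliRunner()
    result = runner.invoke(myapp, ['--dry-run', 'foo', '--myOption', 'x'])
    assert result.exit_code == 0
    assert 'foo invoked (dry run)' in result.output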
We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just blocks on a channel until an HTTP call arrives from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and functions with upper-case names, so that I could use them from a sub-main package
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... And the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
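For completeness, a sketch of that variant (the variable name DEBUG_TEST is my own choice):

package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here and wait for the frontend's signal
    }
    log.Println("Done")
}

Run it with DEBUG_TEST=1 go test -run '^TestHelloWorld$' .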
I finally found an acceptable option. This answer, Skip some tests with go test, put me on the right track.
Essentially, it uses build tags which are not present in normal builds but which I can provide when executing manually, as sketched below.
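To sketch how that looks (the tag name debugtest and the package name are my own choices; newer Go versions use the //go:build form, older ones only the // +build form), the guarded file is compiled only when the tag is supplied, so the normal test suite never sees TestFuncWithFrontend:

//go:build debugtest
// +build debugtest

package systemmodel // stand-in for the real package name

import "testing"

func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result
}

I then run it manually with go test -tags debugtest -run '^TestFuncWithFrontend$' . while a plain go test skips the file entirely.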
Is there any way to generate a code coverage report when using SimpleTest, similar to PHPUnit?
I have read the documentation of SimpleTest on their website but cannot find a clear way to do it!
I came across a website that says we can add
require_once(dirname(__FILE__).'/coverage.php');
to the intended file and it should generate the report, but it did not work!
If there is a helpful website on how to generate code coverage, please share it here.
Thanks a lot.
I could not get it to work in the officially supported way either, but here is something I got working that I was able to hack together by examining their code. This works for v1.1.7 of SimpleTest, not their master code. At the time of this writing v1.1.7 is the latest release, and works with new versions of PHP 7, even though it is an old release.
First off, you have to make sure you have Xdebug installed, configured, and working. On my system there are both a CLI and an Apache version of the php.ini file that have to be configured properly, depending on whether I am using PHP through Apache or directly from the terminal. There are alternatives to Xdebug, but most people use Xdebug.
Then, you have to make the PHP_CodeCoverage library accessible from your code. I recommend adding it to your project as a composer package.
Now you just have to manually use that library to capture code coverage and generate a report. How exactly you do that will depend on how you run your tests. Personally, I run my tests on the terminal, and I have a bootstrap file that php runs before it starts the script. At the end of the bootstrap file, I include the SimpleTest autorun file so it will automatically run the tests in any test classes that get included like so:
require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
Somewhere inside your bootstrap file you will need to: create a filter; whitelist the directories and files you want reported; create a coverage object, passing the filter to the constructor; start coverage; and create and register a shutdown function that changes the way SimpleTest executes the tests, making sure it also stops coverage and generates the coverage report. Your bootstrap file might look something like this:
<?php

require __DIR__.'/vendor/autoload.php';

$filter = new \SebastianBergmann\CodeCoverage\Filter();
$filter->addDirectoryToWhitelist(__DIR__."/src/");

$coverage = new \SebastianBergmann\CodeCoverage\CodeCoverage(null, $filter);
$coverage->start('<name of test>');

function shutdownWithCoverage($coverage)
{
    $autorun = function_exists('\run_local_tests'); // provided by simpletest's autorun.php
    if ($autorun) {
        $result = \run_local_tests(); // this actually runs the tests
    }

    $coverage->stop();

    $writer = new \SebastianBergmann\CodeCoverage\Report\Html\Facade;
    $writer->process($coverage, __DIR__.'/tmp/code-coverage-report');

    if ($autorun) {
        // prevent tests from running twice:
        exit($result ? 0 : 1);
    }
}

register_shutdown_function('\shutdownWithCoverage', $coverage);

require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
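In case it helps, one way to wire this up (the file names here are made up, and auto_prepend_file is just one mechanism; requiring the bootstrap at the top of the test script works too) is:

php -d auto_prepend_file=bootstrap.php tests/my_test.php

The HTML report then lands in tmp/code-coverage-report/index.html, as configured above.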
It took me some time to figure out, as - to put it mildly - the documentation for this feature is not really complete.
Once you have your test suite up and running, just include these lines before the lines that are actually running it:
require_once ('simpletest/extensions/coverage/coverage.php');
require_once ('simpletest/extensions/coverage/coverage_reporter.php');
$coverage = new CodeCoverage();
$coverage->log = 'coverage/log.sqlite'; // This folder should exist
$coverage->includes = ['.*\.php$']; // Modify these as you wish
$coverage->excludes = ['simpletest.*']; // Or, even better, use a settings file
$coverage->maxDirectoryDepth = '1';
$coverage->resetLog();
$coverage->startCoverage();
Then run your tests, for instance:
$test = new ProjectTests(); //It is an extension of the class TestSuite
$test->run(new HtmlReporter());
Finally, generate your reports:
$coverage->stopCoverage();
$coverage->writeUntouched();
$handler = new CoverageDataHandler($coverage->log);
$report = new CoverageReporter();
$report->reportDir = 'coverage/report'; // This folder should exist
$report->title = 'Code Coverage Report';
$report->coverage = $handler->read();
$report->untouched = $handler->readUntouchedFiles();
$report->summaryFile = $report->reportDir . '/index.html';
And that's it. Based on your setup, you might need to make some small adjustments to get it working. For instance, if you are using the autorun.php from SimpleTest, it might be a bit more tricky.
I have some complex Protractor tests written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and later somewhere in the code I use them.
What I need to do is:
1. Move all variables to a separate file (some config file)
2. Put each test in its own file
1 - I tried making a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, the test fails with "userName is not defined".
I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what I did in the last script if they are separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what I did in the last script if they are separate? For example, how do I know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code,
Jasmine provides the global beforeEach and afterEach functions. As the
name implies, the beforeEach function is called once before each spec
in the describe is run, and the afterEach function is called once
after each spec.
There are also beforeAll() and afterAll(), available in jasmine 2, or via the third-party jasmine-beforeAll package for jasmine 1:
The beforeAll function is called only once before all the specs in
describe are run, and the afterAll function is called after all specs
finish. These functions can be used to speed up test suites with
expensive setup and teardown.
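For instance, a short sketch of logging in once per suite with beforeAll (the selectors and URL are made up for illustration):

describe('my app', function () {
    beforeAll(function () {
        // log in once before all specs in this describe
        browser.get('/login');
        element(by.model('user.login')).sendKeys('John');
        element(by.model('user.password')).sendKeys('password');
        element(by.buttonText('Log in')).click();
    });

    it('knows it is logged in', function () {
        expect(element(by.css('.logout')).isPresent()).toBe(true);
    });
});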
1) I tried making a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, the test fails with "userName is not defined". I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of a global test preparation step, you can do it in onPrepare() function, see Setting Up the System Under Test. Example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor), set using global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in the scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the Page Object pattern would help solve the reusability and modularity problem; see the sketch below.
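For example, a minimal page object sketch (the file name, selectors, and URL are made up):

// loginPage.js
var LoginPage = function () {
    this.username = element(by.model('user.login'));
    this.password = element(by.model('user.password'));
    this.submitButton = element(by.buttonText('Log in'));
};

LoginPage.prototype.get = function () {
    browser.get('/login');
};

LoginPage.prototype.login = function (user, pass) {
    this.username.sendKeys(user);
    this.password.sendKeys(pass);
    this.submitButton.click();
};

module.exports = new LoginPage();

A spec then only expresses intent:

var loginPage = require('./loginPage');
loginPage.get();
loginPage.login(browser.params.user.login, browser.params.user.password);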
I'd like to know the behaviour of R3 when processing the Needs field of a script header and what implications for word binding it has.
Background. I'm currently trying to port some R2 scripts to R3 in order to learn R3. In R2 the Needs field of a script header was essentially just documentation, though I made use of it with a custom function to reference scripts that are required to make my script run.
R3 appears to run the Needs-referenced scripts itself, but the binding seems different from DOing the other scripts.
For example when %test-parent.r is:
REBOL [
    title: {test parent}
    needs: [%test-child.r]
]

parent: now
?? parent
?? child
and %test-child.r is:
REBOL [
    title: {test child}
]

child: now
?? child
R3 Alpha (Saphiron build 22-Feb-2013/11:09:25) returns:
>> do %test-parent.r
Script: "test parent" Version: none Date: none
child: 9-May-2013/22:51:52+10:00
parent: 9-May-2013/22:51:52+10:00
** Script error: child has no value
** Where: get ajoin case ?? catch either either -apply- do
** Near: get :name
I don't understand why test-parent cannot access child, which is set by %test-child.r.
If I remove the Needs field from test-parent.r header and instead insert a line to just DO %test-child.r then there is no error and the script performs as expected.
Ah, you've run into Rebol 3's policy to "do what you say, it can't read your mind". R3's Needs header is part of its module system, so anything you load with Needs is actually imported as a module, even if it isn't declared as such.
Loading scripts with Needs is a quick way to get them treated as modules even if the original author didn't declare them as such. Modules get their own contexts where their words are defined. Loading a script as a module is a great way to use a script that isn't that tidy, one that leaks words into the shared script context. Like your %test-child.r script: it leaks the word child into the script context. What if you didn't want that to happen? Load it with Needs or import, and that will clean that right up.
If you want a script treated as a script, use do to run it. Regular scripts use a (mostly) shared context, so when you do a script it has effect on the same context as the script you called it from. That is why the child: now statement affected child in the parent script. Sometimes that's what you want to do, which is why we worked so hard to make scripts work that way in R3.
If you are going to use Needs or import to load your own scripts, you might as well make them modules and export what you want, like this:
REBOL [
    type: module
    title: {test child}
    exports: [child]
]

child: now
?? child
As before, you don't even have to include the type: module if you are going to be using Needs or import anyway, but it would help just in case you run your module with do. R3 assumes that if you declare your module to be a module, that you wrote it to be a module and depend on it working that way even if it's called with do. At the least, declaring a type header is a stronger statement than not declaring a type header at all, so it takes precedence in the conflicting "do what you say" situation.
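With that module version of %test-child.r, your original %test-parent.r works as-is, because the exported word child is imported into the parent's context:

REBOL [
    title: {test parent}
    needs: [%test-child.r]  ; now loads the module version above
]

parent: now
?? parent
?? child  ; no error now: child was exported by the module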
Look here for more details about how the module system works: How are words bound within a Rebol module?