Setup, runTests and teardown fail at Batman.TestCase - testing

I am trying to get Batman testing up and running. QUnit and the tests run fine, but when I use the example:
class SimpleTest extends Batman.TestCase
  @test 'A simple test', ->
    @assert true

test = new SimpleTest
test.runTests()
I get the following messages when I browse to localhost:3000/qunit:
Setup failed on A simple test: undefined is not a function
Died on test #2 at Test.Batman.TestCase.TestCase.Test.Test.run (localhost:3000/assets/extras/testing/test_case.js?body=1:20:22)
at SimpleTest.Batman.TestCase.TestCase.runTests (localhost:3000/assets/extras/testing/test_case.js?body=1:51:28)
at localhost:3000/assets/simple_test.js?body=1:24:8
at localhost:3000/assets/simple_test.js?body=1:26:4: undefined is not a function
Teardown failed on A simple test: undefined is not a function
In test_helper.coffee, I manually included the project, Sinon, and the four test case source files from the GitHub source code found here, including test_case.coffee.
What am I doing wrong?

Depending on how those .coffee source files are loaded, it's possible that they don't have their dependencies loaded first.
You could try this:
1. Download the 0.16 release from http://batmanjs.org/download.html.
2. Use the precompiled batman.testing.js from the release.
3. Make sure QUnit loads batman.js first, then batman.testing.js. (Batman.Object must be defined before you load Batman.TestCase.)
Does that help?


TypeError: userAgent.toLowerCase is not a function

While running an Angular unit test in the Chrome browser, I ended up with the following error:
TypeError: userAgent.toLowerCase is not a function
at _isAndroid (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/forms/fesm2015/forms.mjs:176:43)
at new DefaultValueAccessor (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/forms/fesm2015/forms.mjs:227:38)
at NodeInjectorFactory.factory (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/forms/fesm2015/forms.mjs:254:1)
at getNodeInjectable (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:3565:44)
at instantiateAllDirectives (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:10318:27)
at createDirectivesInstances (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:9647:5)
at ɵɵelementStart (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:14561:9)
at templateFn (ng:///TodaysMarketMoversComponent.js:397:66)
at executeTemplate (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:9618:9)
at renderView (http://localhost:9876/_karma_webpack_/webpack:/node_modules/@angular/core/fesm2015/core.mjs:9421:13)
The most confusing part to me: the toLowerCase function is never used anywhere in the component or spec file.
It looks like the userAgent is being checked by the @angular/forms lib, and it has been deleted or not provided at all for some reason.
If it's not your code that's changing the userAgent, then it's probably some third-party script.
If I were you, I'd start with the latest additions in terms of libraries and dependencies and peel them back until I get a unit test running. That should give you the clue.

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds, etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test would then only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
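For illustration, a minimal sketch of what such a gate could look like (this is my reading of the question, not code from the actual system; the Result type is a placeholder):

package systemmodel

import (
    "net/http"
    "sync"
)

// Result stands in for whatever SystemModel.Run() returns.
type Result struct{}

// WithHTTPFrontend blocks until the frontend sends any HTTP request
// to addr, then returns the result so the test can continue.
func (r *Result) WithHTTPFrontend(addr string) *Result {
    start := make(chan struct{})
    var once sync.Once
    go http.ListenAndServe(addr, http.HandlerFunc(
        func(w http.ResponseWriter, req *http.Request) {
            once.Do(func() { close(start) }) // the signal from the frontend
        }))
    <-start // block here until the signal arrives
    return r
}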
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
1. Export all these initialization variables and code (make them upper-case), so that I could use them from a sub-main package.
2. Duplicate the whole code.
3. Move the test into a sub-package main and then have a func main() for the test with the frontend, plus a _test.go for the normal test, which would have to import a few things from the parent package.

I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively, it's also possible to set a custom environment variable that you check in the test function to change its behaviour.
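For example, a minimal sketch of the environment-variable approach (the variable name DEBUG_TEST is just an illustration):

package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // Gate the debug behaviour on an environment variable
    // instead of a command line flag.
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}

Run it with: DEBUG_TEST=1 go test -run '^TestHelloWorld$' .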
I finally found an acceptable option. This answer, "Skip some tests with go test", put me on the right track.
Essentially, it uses build tags which are not present in normal builds but which I can provide when executing the test manually.
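A minimal sketch of that approach (the tag name debugfrontend and the package name are illustrative; SystemModel is the identifier from the question): the frontend variant lives in a file guarded by a build constraint, so a plain go test never compiles it and cannot hang.

//go:build debugfrontend
// +build debugfrontend

package systemmodel

import "testing"

// TestFuncWithFrontend only compiles when the debugfrontend tag is
// supplied, so the normal test suite never sees it.
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result
}

Run it on demand with: go test -tags debugfrontend -run '^TestFuncWithFrontend$' .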

Code coverage in SimpleTest

Is there any way to generate a code coverage report when using SimpleTest, similar to PHPUnit?
I have read the documentation of SimpleTest on their website but cannot find a clear way to do it!
I came across this website, which says we can add
require_once(dirname(__FILE__).'/coverage.php');
to the intended file and it should generate the report, but it did not work!
If there is a helpful website on how to generate code coverage, please share it here.
Thanks a lot.
I could not get it to work in the officially supported way either, but here is something I got working by examining their code. This works for v1.1.7 of SimpleTest, not their master code. At the time of this writing, v1.1.7 is the latest release, and it works with new versions of PHP 7 even though it is an old release.
First off, you have to make sure you have Xdebug installed, configured, and working. On my system there are both CLI and Apache versions of the php.ini file that have to be configured properly, depending on whether I am using PHP through Apache or directly from the terminal. There are alternatives to Xdebug, but most people use Xdebug.
Then, you have to make the PHP_CodeCoverage library accessible from your code. I recommend adding it to your project as a Composer package.
Now you just have to manually use that library to capture code coverage and generate a report. How exactly you do that will depend on how you run your tests. Personally, I run my tests in the terminal, and I have a bootstrap file that PHP runs before it starts the script. At the end of the bootstrap file, I include the SimpleTest autorun file so it will automatically run the tests in any test classes that get included, like so:
require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
Somewhere inside your bootstrap file you will need to create a filter, whitelist the directories and files you want to get reported, create a coverage object and pass in the filter to the constructor, start coverage, and create and register a shutdown function that will change the way SimpleTest executes the tests to make sure it also stops the coverage and generates the coverage report. Your bootstrap file might look something like this:
<?php
require __DIR__.'/vendor/autoload.php';

$filter = new \SebastianBergmann\CodeCoverage\Filter();
$filter->addDirectoryToWhitelist(__DIR__."/src/");

$coverage = new \SebastianBergmann\CodeCoverage\CodeCoverage(null, $filter);
$coverage->start('<name of test>');

function shutdownWithCoverage($coverage)
{
    $autorun = function_exists('\run_local_tests'); // provided by simpletest
    if ($autorun) {
        $result = \run_local_tests(); // this actually runs the tests
    }

    $coverage->stop();

    $writer = new \SebastianBergmann\CodeCoverage\Report\Html\Facade;
    $writer->process($coverage, __DIR__.'/tmp/code-coverage-report');

    if ($autorun) {
        // prevent tests from running twice:
        exit($result ? 0 : 1);
    }
}

register_shutdown_function('\shutdownWithCoverage', $coverage);

require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
It took me some time to figure out, as - to put it mildly - the documentation for this feature is not really complete.
Once you have your test suite up and running, just include these lines before the ones that actually run it:
require_once ('simpletest/extensions/coverage/coverage.php');
require_once ('simpletest/extensions/coverage/coverage_reporter.php');
$coverage = new CodeCoverage();
$coverage->log = 'coverage/log.sqlite'; // This folder should exist
$coverage->includes = ['.*\.php$']; // Modify these as you wish
$coverage->excludes = ['simpletest.*']; // Or it is even better to use a setting file
$coverage->maxDirectoryDepth = '1';
$coverage->resetLog();
$coverage->startCoverage();
Then run your tests, for instance:
$test = new ProjectTests(); // An extension of the TestSuite class
$test->run(new HtmlReporter());
Finally, generate your reports:
$coverage->stopCoverage();
$coverage->writeUntouched();
$handler = new CoverageDataHandler($coverage->log);
$report = new CoverageReporter();
$report->reportDir = 'coverage/report'; // This folder should exist
$report->title = 'Code Coverage Report';
$report->coverage = $handler->read();
$report->untouched = $handler->readUntouchedFiles();
$report->summaryFile = $report->reportDir . '/index.html';
And that's it. Based on your setup, you might need to make some small adjustments to get it working. For instance, if you are using the autorun.php from SimpleTest, it might be a bit more tricky.

Protractor - How to separate each test into its own file and separate variables

I have some complex Protractor tests written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and then use them later in the code.
What I need to do is:
1. Separate all variables into an additional file (some config file)
2. Separate each test into its own file

1 - I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it fails with "userName is not defined".
I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what I did in the last script if the tests are separate - for example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what I did in the last script if the tests are separate - for example, how do I know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll() and afterAll(), available in Jasmine 2, or via the third-party jasmine-beforeAll for Jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
1) I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it fails with "userName is not defined". I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    // ...
    params: config,
    // ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of global test preparation step, you can do it in the onPrepare() function; see Setting Up the System Under Test. An example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor); set them using global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in the scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the Page Object pattern would help to solve the reusability and modularity problems.

How to use a dependency of a module within a Play app

I am writing a Play Framework module in order to share some common logic among multiple Play apps. One of the things I would like my module to do is provide some frequently-used functionality by way of 3rd-party modules, for example the excellent Markdown module.
First of all, is it possible to do this? I want all the apps that include my module to be able to use the .markdown().raw() String extension without needing to explicitly declare the Markdown module as a dependency. The Play Framework Cookbook chapter 5 seems to imply that it is possible, unless I am reading it wrong.
Secondly, if it is possible, how does it work? I have created the following vanilla example case, but I'm still getting errors.
I created a new, empty application "myapp", and a new, empty module "mymod", both in the same parent directory. I then modified mymod/conf/dependencies.yml to:
self: mymod -> mymod 0.1

require:
    - play
    - play -> markdown [1.5,)
I ran play deps on mymod and it successfully downloaded and installed the Markdown module. Running play build-module also worked fine with no errors.
Then, I modified myapp/conf/dependencies.yml to:
# Application dependencies

require:
    - play
    - mymod -> mymod 0.1

repositories:
    - Local Modules:
        type: local
        artifact: ${application.path}/../[module]
        contains:
            - mymod
I ran play deps on myapp and it successfully found mymod, and generated the myapp/modules/mymod file, containing the absolute path to mymod.
I ran myapp using play run and was able to see the welcome page on http://localhost:9000/. So far so good.
Next, I modified myapp/app/views/Application/index.html to:
#{extends 'main.html' /}
#{set title:'Home' /}
${"This is _MarkDown_, by [John Gruber](http://daringfireball.net/projects/markdown/).".markdown().raw()}
I restarted myapp, and now I get the following error.
09:03:23,425 ERROR ~
#6a6eppo46
Internal Server Error (500) for request GET /
Template execution error (In /app/views/Application/index.html around line 4)
Execution error occured in template /app/views/Application/index.html. Exception raised was MissingMethodException : No signature of method: java.lang.String.markdown() is applicable for argument types: () values: [].
play.exceptions.TemplateExecutionException: No signature of method: java.lang.String.markdown() is applicable for argument types: () values: []
at play.templates.BaseTemplate.throwException(BaseTemplate.java:86)
at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:257)
at play.templates.Template.render(Template.java:26)
at play.templates.GroovyTemplate.render(GroovyTemplate.java:187)
at play.mvc.results.RenderTemplate.<init>(RenderTemplate.java:24)
at play.mvc.Controller.renderTemplate(Controller.java:660)
at play.mvc.Controller.renderTemplate(Controller.java:640)
at play.mvc.Controller.render(Controller.java:695)
at controllers.Application.index(Application.java:13)
at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:548)
at play.mvc.ActionInvoker.invoke(ActionInvoker.java:502)
at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:478)
at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:473)
at play.mvc.ActionInvoker.invoke(ActionInvoker.java:161)
at Invocation.HTTP Request(Play!)
Caused by: groovy.lang.MissingMethodException: No signature of method: java.lang.String.markdown() is applicable for argument types: () values: []
at /app/views/Application/index.html.(line:4)
at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:232)
... 13 more
And just to confirm I'm not crazy, I tried adding the play -> markdown [1.5,) line to myapp/conf/dependencies.yml and restarted the app, and confirmed that it works.
I feel like I'm missing something obvious. Many thanks in advance to anyone who can help! :)
Yes, I had the same problem; it seems that transitive dependencies through custom home-made modules do not work.