I have a pretty nice Gulp-based Karma-Jasmine unit test workflow going. My question is about how to avoid source conflicts in one's tests. Without having to do anything, Karma-Jasmine has auto-magically exposed my src files to Jasmine and detected my tests. I can't see how this would be useful in a real codebase where things don't fit the happy path.
Example: Create two files you would like to test that both implement a function with the same name, Test(). One returns true, the other false. Which one is your test actually testing? Do I have any control over this? I want to be able to test both (forget telling me about better JS design, that is obvious).
Suppose I have a type MyType with a private method (mt *MyType) private() in a package mypackage.
I also have a directory tests, where I want to store the tests for my package. This is what tests/mypackage_test.go looks like:
package mypackage_test
import (
"testing"
"myproj/mypackage"
)
func TestPrivate(t *testing.T) {
// Some test code
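// Presumably something like the following (hypothetical; the actual code is
// elided in the question), which is what the compiler rejects:
//   mt := &mypackage.MyType{}
//   mt.private()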
}
However, when I run go test I get the error cannot refer to unexported field or method mypackage.(*MyType)."".private. I've googled a bit and found out that identifiers starting with a lower-case letter cannot be seen outside their own package (and this seems to be true, because upper-case functions are freely callable from the tests).
I also read somewhere that naming the test file <...>_internal_test.go could solve my problem, like this (tests/mypackage_internal_test.go):
package mypackage
import (
"testing"
)
func TestPrivate(t *testing.T) {
mt := &MyType{}
// Some test code
}
But with this I only get undefined: MyType. So, my question: how can I test internal/private methods?
Why do you place your tests in a different package? Go's testing mechanism uses the _test.go suffix for test files, so you can place tests in the same package (and directory) as the actual code, avoiding the problem you describe. Placing tests in a separate tests directory is not idiomatic Go. Do not try to fight the Go conventions; it's not worth the effort and you are mostly going to lose the fight.
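As a minimal sketch of that layout (assuming MyType and its unexported method look roughly like the code below; the bool return value is an assumption for illustration), a test file that lives in the mypackage directory and declares the same package can call the unexported method directly:

// myproj/mypackage/mypackage.go
package mypackage

type MyType struct{}

// private is the unexported method under test (body assumed for illustration).
func (mt *MyType) private() bool {
    return true
}

// myproj/mypackage/mypackage_internal_test.go -- same directory, same package
package mypackage

import "testing"

func TestPrivate(t *testing.T) {
    mt := &MyType{}
    if !mt.private() { // unexported identifiers are visible inside the package
        t.Error("private() = false, want true")
    }
}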
Go insists that files in the same folder belong to the same package, that is except for _test.go files. Moving your test code out of the package allows you to write tests as though you were a real user of the package. You cannot fiddle around with the internals, instead you focus on the exposed interface and are always thinking about any noise that you might be adding to your API.
And:
If you do need to unit test some internals, create another file with _internal_test.go as the suffix. Internal tests will necessarily be more brittle than your interface tests — but they’re a great way to ensure internal components are behaving, and are especially useful if you do test-driven development.
Source: https://medium.com/@matryer/5-simple-tips-and-tricks-for-writing-unit-tests-in-golang-619653f90742
There are different opinions on how you should structure your tests within a Go project, and I suggest you read the blog post above.
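To make the black-box style from the first quote concrete, here is a sketch: the test file still lives in the mypackage directory but declares package mypackage_test, so it can only exercise the exported API. Public() below is a hypothetical exported method, named purely for illustration:

// myproj/mypackage/mypackage_test.go -- same directory, external test package
package mypackage_test

import (
    "testing"

    "myproj/mypackage"
)

func TestPublicBehaviour(t *testing.T) {
    mt := &mypackage.MyType{}
    // Only exported identifiers are visible from package mypackage_test.
    // Public is a hypothetical exported method wrapping the behaviour under test.
    if !mt.Public() {
        t.Error("Public() = false, want true")
    }
}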
Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
(let [tns 'my.namespace-test]
(use tns :reload-all)
(clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. So long as you have only one, it's OK.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in Vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.
Can anyone please tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Salesforce platform Apex classes? It seems that with code coverage they are not expecting outcomes the way we expect with JUnit; they want to check whether the test cases go through the flows of the source classes (i.e. which code paths are exercised).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I have posted this question because of the way Salesforce expects code coverage to work. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from the AppExchange and see it working.
This would really help you generate test classes and save a lot of your development time.
When designing a new testing system, should I keep tests in-tree with the source code, for example in a test sub-directory of the application root, so that the tests for a given branch of the code run only against that branch?
OR
Should I keep a separate source tree for the tests, and run the latest tests against all branches of the project?
Our team, and most teams where I work, put unit tests in the same tree as the code, but put the rest of the tests (aka the tests the test team writes) in a separate tree. Organizationally, the test tree mirrors the code tree (i.e. the directory structure is the same (mostly)).
But we write a lot of test code.
I do it both ways. It depends:
If the system under test is distributed as binary, I keep tests next to the code. I prefer this because it makes it easier to switch between tests and source. It also increases the visibility of tests to other developers, especially those who aren't yet writing their own tests.
However, if the system under test is distributed as source code, I keep tests in a parallel tree so that people can choose to get the working system only.
I think keeping them with the source code is better. It's often the case that expected behavior for a given piece of code changes, and the tests then change as well. Also, as you add functionality, you will get new tests that would fail on older code.
That said, it makes good sense to have tests in a separate directory of your source repository, so you can easily get rid of them when releasing. But they should be versioned together with the code they're testing, in any case.
Where I am working, we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately, the release given to us is rarely tested by the developers, and that's "the relationship" we have with them these last 7 years....
As a result, if the patch/release fails the tests at the functional-testing level or at the quality level, then with each patch we are given, we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module means you only have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit tests or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (established project with manual testing process).
Implement a continuous testing framework that you and the developers can access. Something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly tests on the build. Heck, they don't even need to test it themselves; your tests will run every night (or regularly), and they'll know straight away what faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, all new code must have a certain score in some static analysis checker.
Of course anything like this can be gamed, so have regular post release debriefs where you discuss areas of concern and put in place contingency to avoid it in future.
Set up a ThoughtWorks Go server with its dashboard and run a Go agent at your end.
http://www.thoughtworks-studios.com/forms/form/go/download