I can't trace where I know it from, but normally if one writes a test for a method Foo, the corresponding test is named TestFoo.
If one tests an unexported function, say foo, what should the test be named then?
My question comes from the fact that the JetBrains IDE for Go, when asked to generate a test for an unexported function, generates something like Test_foo.
This behavior may make sense: if you have Foo and its unexported counterpart foo in the same package, you'd want to distinguish their tests somehow (at least for the IDE's jump-to-test feature).
So is there any convention on naming tests after unexported functions?
BTW:
the documentation for the Go testing package says that a test is executed if it is:
any function of the form
func TestXxx(*testing.T)
where Xxx can be any alphanumeric string (but the first letter must not be in [a-z]) and serves to identify the test routine.
Strictly read, this means that any test having an underscore in its name shouldn't be executed by go test. However, we all know that such tests work just fine.
My initial confusion with this originated from two things:
1) an assumption that underscores are allowed in test function names - an assumption backed by tons of major open-source projects that do this
2) the fact that the JetBrains IDE generates tests with names containing underscores
The answer to my own question: there is a convention (I'd rather say it's a guideline from Google), which many projects violate: underscores should not be used in test names.
I've voted to close my own question and created a bug report in the JetBrains bug tracker.
https://youtrack.jetbrains.com/issue/GO-5185
I am working in SWI-Prolog.
We have been asked to do some 'pure logic' implementations, and to do so, some module declarations have to be added to the code (in order to control its execution). Let us take one of them:
:- module(_,_,[]).
My code works fine without it, but when I add it at the beginning of the file, it fails with the message: Arguments are not sufficiently instantiated.
I have also tried adding :- module(_,_,[]). to an otherwise empty file, and it still fails with the same error; so it is not a problem with my code, but a problem with the module declaration itself.
I have searched for the error, but I only find it related to a different problem: usually using a variable before it is instantiated, as in (Prolog - Arguments are not sufficiently instantiated).
When I switch to the project view, I can see coverage percentages for a single class.
When I go inside the class, I can't see which methods are covered.
When I export the results and open the HTML in a browser, I can see some green and red lines.
I understand that if a method is marked red, or has no green at all, it has no unit test.
But that is a hard way to find out.
Are there better ways? For example: how can I find the unit test for a method, if it has one?
Answering "how can I find the unit test for a method, if it has one?":
I think there is a misconception on your end. Nothing says that there is exactly one (or zero) unit test for a specific method.
It is rather common that there are multiple tests per production code method, for example to test the different results for different cases of input parameters.
It is also possible that a production code method gets executed when some "unrelated" test runs.
From that point of view, the best you can do is select the production code method and have IntelliJ show you its usages. IntelliJ tells you in which module each usage is found, and obviously: if a usage is within your unit test module, you know for sure that the method is used in the tests listed there.
But as said: that doesn't mean that other tests aren't running that method when doing their specific testing.
I'm using Rubberduck to unit test my VBA implementations. When using multiple Asserts of the same kind (e.g. Assert.IsTrue) in one TestMethod, the test result does not tell me which of them failed, as far as I can see.
Is there a way to find out which Assert failed, or is this on Rubberduck's future roadmap? Of course I could add my own information, e.g. by using Debug.Print before each Assert, but that would mean a lot of extra code.
I know there are different opinions about multiple Asserts in one test, but I chose to have them in my situation and this discussion is already covered elsewhere.
Disclaimer: I'm heavily involved in Rubberduck's development.
The IAssert interface that both the Rubberduck.AssertClass and the Rubberduck.PermissiveAssertClass implement includes an optional message parameter on every single member.
Simply include a different and descriptive message for each assertion:
Assert.AreEqual expected, actual, "oops, didn't expect this"
Assert.IsTrue result, "truth is an illusion"
The Test Explorer toolwindow will display the custom message under the Message column, but only when the assertion fails.
We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
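For reference, that approach means listing every test method explicitly in testng.xml, roughly like this (class and method names invented for illustration):

<test name="workflow" preserve-order="true">
  <classes>
    <class name="com.example.WorkflowTest">
      <methods>
        <include name="login" />
        <include name="enterData" />
        <include name="editData" />
        <include name="checkDataDisplayed" />
      </methods>
    </class>
  </classes>
</test>

Every new workflow step means another <include> line to keep in sync with the code.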
Is there a better way to achieve this?
I am also open to any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this). However, sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). Doing that with "independent" tests would mean running the same multi-step setup over and over, which would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
Try dependsOnGroups along with dependsOnMethods. Put all methods of the same class in one group.
For example:
import org.testng.annotations.Test;

public class Class1Test {
    @Test(groups = { "cls1", "other" })
    public void cls1test1() {
    }

    @Test(groups = { "cls1", "other" }, dependsOnMethods = "cls1test1", alwaysRun = true)
    public void cls1test2() {
    }
}
In class 2:

public class Class2Test {
    @Test(groups = { "cls2", "other" }, dependsOnGroups = "cls1", alwaysRun = true)
    public void cls2test1() {
    }

    @Test(groups = { "cls2", "other" }, dependsOnMethods = "cls2test1", dependsOnGroups = "cls1", alwaysRun = true)
    public void cls2test2() {
    }
}
There is an easy (albeit hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to run class-wise because we couldn't guarantee that the test classes wouldn't interfere with each other.
This is what we did:
Put a
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (a group in the sense used in this paragraph, not a TestNG group). Inside the groups, the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose (see the sketch below).
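A minimal sketch of the complete workaround might look like this (the class names are mine; only the group name comes from the answer above):

import org.testng.annotations.Test;

// AbstractTestBase.java - every test class extends this base, so the
// class-level annotation below is inherited and TestNG treats all of
// them as dependent, keeping each class's methods together.
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
public abstract class AbstractTestBase {
}

// DummyGroupTest.java - defines the dummy group so that the dependency
// above can be satisfied. This class must not extend AbstractTestBase,
// or it would depend on its own group.
public class DummyGroupTest {
    @Test(groups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
    public void dummy() {
        // intentionally empty; exists only so that the group exists
    }
}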
A command line utility/software could potentially consist of many different switches and arguments.
Let's say your software is called CLI, and let's say CLI has the following features:
The general syntax of CLI is:
CLI <data structures> <operation> <required arguments> [optional arguments]
<data structures> could be 'matrix', 'complex numbers', 'int', 'floating point', 'log'
<operation> could be 'add', 'subtract', 'multiply', 'divide'
I can't think of any required and optional arguments right now, but let's say your software does support them.
Now you want to test this software, and you wish to test the interface itself, not the logic: essentially, the interface must return the correct success codes and error codes.
A lot of real-world software still presents a command-line interface with several options, so I am curious whether there is any established formal testing methodology for this. One idea I had was to construct a grammar (like EBNF) describing the 'language' of the interface, but I fail to push this idea ahead. What good is a grammar in this case? How does it enable the generation of the many, many combinations?
I am curious to learn more about any theoretical models which could be applied to such a problem, and whether anyone here has actually done such testing with satisfying coverage.
There is a command-line tool as part of a product I maintain, and I have a situation that's very similar to what you describe. What I did was employ a unit testing framework and encode each combination of arguments as a test method.
The program is implemented in C#/.NET, so I use Microsoft's testing framework that's built into Visual Studio, but the approach would work with any unit testing framework.
Each test invokes a utility function that starts the process, sends in the input, and collects the output. Then each test is responsible for verifying that the output from the CLI matches what was expected. In some cases, there's a family of test cases that can be performed by a single test method with a for loop in it; the loop runs the CLI and checks the output for each iteration.
The set of tests I have does not cover every permutation of arguments, but it covers the 80% cases, and I can add new tests if there are ever any defects.
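In Java terms (the answer describes C#/MSTest, but the shape is the same in any framework), the utility function and one test using it might look roughly like the sketch below. The binary name "cli", the sample arguments, and the expected error-code contract are placeholders, not from the answer:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.testng.Assert;
import org.testng.annotations.Test;

public class CliInterfaceTest {

    // What the interface contract is about: exit code and output.
    static final class Result {
        final int exitCode;
        final String output;

        Result(int exitCode, String output) {
            this.exitCode = exitCode;
            this.output = output;
        }
    }

    // Utility: starts the CLI process with the given arguments and
    // collects its combined stdout/stderr and its exit code.
    static Result runCli(String... args) throws IOException, InterruptedException {
        List<String> command = new ArrayList<>();
        command.add("cli"); // placeholder binary name
        command.addAll(Arrays.asList(args));
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.redirectErrorStream(true); // merge stderr into stdout
        Process process = builder.start();
        String output = new String(process.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8);
        int exitCode = process.waitFor();
        return new Result(exitCode, output);
    }

    // One encoded combination of arguments: a missing operand should
    // make the interface report an error code (assumed contract).
    @Test
    public void addWithMissingOperandReturnsErrorCode() throws Exception {
        Result result = runCli("int", "add", "1");
        Assert.assertTrue(result.exitCode != 0,
                "expected a non-zero exit code, got output: " + result.output);
    }
}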
Using a recursive grammar to generate switches is an interesting idea. If you were to try this, you would need to first write the grammar in such a way that all switches can be produced, and then do a random walk of the grammar.
A random walk provides an easy way to generate invocations from the grammar and output the result.
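A minimal sketch of such a walk, over a toy grammar I invented to mirror the question's CLI syntax:

import java.util.List;
import java.util.Map;
import java.util.Random;

public class GrammarWalk {

    // Each non-terminal maps to its alternatives; an alternative is a
    // sequence of symbols. Symbols that appear as keys are non-terminals,
    // everything else is a terminal. The grammar here is a toy example.
    static final Map<String, List<List<String>>> GRAMMAR = Map.of(
            "<invocation>", List.of(
                    List.of("CLI", "<datastructure>", "<operation>")),
            "<datastructure>", List.of(
                    List.of("matrix"), List.of("int"), List.of("floating point")),
            "<operation>", List.of(
                    List.of("add"), List.of("subtract"),
                    List.of("multiply"), List.of("divide")));

    static final Random RANDOM = new Random();

    // Recursive random walk: pick a random alternative for a
    // non-terminal, expand its symbols, and join the results.
    static String expand(String symbol) {
        List<List<String>> alternatives = GRAMMAR.get(symbol);
        if (alternatives == null) {
            return symbol; // terminal symbol: emit as-is
        }
        List<String> chosen = alternatives.get(RANDOM.nextInt(alternatives.size()));
        StringBuilder result = new StringBuilder();
        for (String part : chosen) {
            if (result.length() > 0) {
                result.append(' ');
            }
            result.append(expand(part));
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // Print a handful of generated invocations, e.g. "CLI matrix add".
        for (int i = 0; i < 5; i++) {
            System.out.println(expand("<invocation>"));
        }
    }
}

Each generated string is a candidate invocation; feeding it to the CLI and checking the exit code turns the walk into a crude combination generator for interface tests.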