I am writing test cases for my application. Most of my controllers contain common CRUD code, so I have written a common macro and use it inside my controllers; that way the test cases for all of the controllers are generated automatically. But I am confused about how to make this common code overridable so that I can override it whenever I want.
defmodule Qserv.ControllerTest do
  defmacro __using__(_options) do
    quote location: :keep do
      use Qserv.Web.ConnCase, async: true

      # this kernel will give me access to the current `@model` and `@controller`
      use Qserv.KernelTest

      describe "#{@controller}.create/2" do
        test "All required fields set `required` in model should generate errors that these fields are missing -> One, two, All"
        test "Only required fields should create record and match the object"
      end

      # defoverridable index: 2 (I want to override the above `describe` completely and/or the included test cases)
    end
  end
end
Any help/idea how to achieve this?
I am generally not a fan of "let's do things and then undo them later". It forces developers to keep a stack in their head of how things are added and removed later on.
In this case in particular, you are coupling on the test name. Imagine someone decides to make the "two" in "One, two, All" uppercase. Now all of the future overrides won't apply and you will have duplicate tests.
A better solution is to explicitly opt in to what you need. For example, you can define smaller macros that you use when necessary:
describe_create!
describe_update!
...
describe_delete!
Maybe you could have describe_restful! that invokes all of them. The lesson here is to have small building blocks that you build on top of instead of having a huge chunk that you try to break apart later.
PS: please use better names than the describe_x that I used. :)
Take, for instance, a function that performs some operation on a database. In Python, it would look like the following:
def dao_transfer(cnx, account_id1, account_id2, money):
    spam = cnx.execute('query 1', account_id1)
    egg = cnx.execute('query 2', account_id2, spam)
    return egg
What I do in my unit tests is mock the cnx object and only check that cnx.execute is called and that dao_transfer returns the last mocked return value of cnx.execute.
I have the feeling that this is poor testing.
What Django does is not unit test functions that interact with the database; instead, it does what I call integration tests, i.e. you spawn a full database, load it with some initial data, execute your side-effect-powered function, and check that the database is in the correct state.
I could do that, but I would rather have unit tests. Is there an approach to unit testing side-effecting functions that doesn't rely on checking the arguments of every single side-effecting call?
Is there a pure approach to IO that allows better unit testing?
I keep getting a PhantomJS error: UnreachableBrowserException.
I want to mark the test as skipped or passed in the catch block where this exception is handled. How do I do that?
You can use JUnit's Assume class at the beginning of your test to mark the test as ignored/skipped based on a condition.
For example:
@Test
public void myTest() {
    Assume.assumeTrue(yourBooleanCondition);
    // continue your test steps...
}
You can read about its different applications here.
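To apply this to your case, JUnit 4 also provides Assume.assumeNoException, which you can call from your catch block so that the run is reported as skipped rather than failed. A minimal sketch, assuming the exception is Selenium's UnreachableBrowserException and leaving the actual driver setup out (the class and method names are only illustrative):

import org.junit.Assume;
import org.junit.Test;
import org.openqa.selenium.remote.UnreachableBrowserException;

public class PhantomJsTest {

    @Test
    public void myTest() {
        try {
            // ... start PhantomJS and run your test steps ...
        } catch (UnreachableBrowserException e) {
            // Reports the test as skipped (assumption violated) instead of failed.
            Assume.assumeNoException(e);
        }
    }
}

Note that this only works if the exception actually reaches your catch block; if the browser startup fails outside the try, the test is still reported as an error.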
However, being able to mark a failed test as passed goes against the testing mantra and defeats the purpose of writing such tests entirely. I do not know of any framework that allows you to do that. If you absolutely have to, my guess is that you would have to fiddle with the results.
The following function iterates through the names of directories in the file system and, if they are not already in the database, adds these names as records to a database table. (Please note this question applies to most languages.)
def find_new_dirs():
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
I want to write a unit test for this function. However, the function has a dependency on an external component - a database. So how should I write this test?
I assume I should 'mock out' the database. Does this mean I should take the function get_dirs_in_db as a parameter, like so?
def find_new_dirs(get_dirs_in_db):
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or possibly like so?
def find_new_dirs(db):
    dirs_listed_in_db = db.get_dirs()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or should I take a different approach?
Also, should I design my whole project this way from the start, or should I refactor it to this design when the need arises while writing tests?
What you're describing is called dependency injection, and yes, it is a common way of writing testable code. The second method you outlined (where you pass in the db) is probably more common. Also, you can give the db parameter of your function a default value, so that you only need to specify the mock db in test cases.
Whether to write your code that way at the outset or modify it later would be a matter of opinion, but if you adhere to the Test-driven development (TDD) methodology then you would write your tests before your code-under-test anyway.
There are other ways to deal with this problem, but you're asking a broad question at that point.
I take it these code fragments are Python, which I'm not familiar with, but in any case it looks like the methods are detached from any stateful object, and I'm not sure whether that's idiomatic Python or simply your design.
In an OO design you'd want an object that holds a data access object in its state (similar to your second version) and mock that object for tests. You'd also want to mock the get_directories_in_our_path part.
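For illustration, here is a sketch of that structure in Java (the interface and class names are made up and only loosely mirror your function): both the data access object and the directory listing are injected, and a test can supply in-memory fakes for both.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// The data access dependency, injected rather than created internally.
interface DirectoryDao {
    Set<String> getDirsInDb();
}

// Supplies the directory names found on disk; injectable so it can be faked as well.
interface DirectoryLister {
    List<String> getDirectoriesInOurPath();
}

class NewDirFinder {
    private final DirectoryDao dao;
    private final DirectoryLister lister;

    NewDirFinder(DirectoryDao dao, DirectoryLister lister) {
        this.dao = dao;
        this.lister = lister;
    }

    // Same logic as find_new_dirs: report directories on disk that the db doesn't know about.
    List<String> findNewDirs() {
        Set<String> known = dao.getDirsInDb();
        return lister.getDirectoriesInOurPath().stream()
                .filter(dir -> !known.contains(dir))
                .collect(Collectors.toList());
    }
}

// In a test, both dependencies are replaced by in-memory fakes.
class NewDirFinderTest {
    public static void main(String[] args) {
        DirectoryDao fakeDao = () -> new HashSet<>(Arrays.asList("a", "b"));
        DirectoryLister fakeLister = () -> Arrays.asList("a", "b", "c");

        List<String> result = new NewDirFinder(fakeDao, fakeLister).findNewDirs();
        assert result.equals(Arrays.asList("c")) : "only the unseen directory should be reported";
    }
}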
As for when this design should be done: as the first step, before creating the first code file. You should use dependency injection throughout your code. This will aid testing as well as decoupling, and increase the reusability of your classes.
I have many methods in my class. When I run the code, the methods are called in random order, but in my class every method depends on its predecessor, i.e. the 2nd method depends on the 1st method, the 3rd method depends on the 2nd method, and so on. I want to execute all the methods sequentially.
I have tried the approaches below and tested the code, but the methods are still called in random order.
// using sequential
@Test(sequential = true)
public void Method1() {
}

@Test(sequential = true)
public void Method2() {
}

// using singleThreaded
@Test(singleThreaded = true)
public void Method1() {
}

@Test(singleThreaded = true)
public void Method2() {
}
I have passed the following parameters in the testng.xml as well:
<test name="Test" preserve-order="true" annotations="JDK">
<classes>
<class name="com.test" >
<methods>
<include name="method1"/>
<include name="method2"/>
<include name="method3"/>...
</methods>
</class>
</classes>
</test>
</suite>
When I tested it with @Test(dependsOnMethods = ""), instead of executing the methods sequentially, the methods get skipped.
How do I execute the tests sequentially in TestNG?
If you want to run all your test methods in a specific order, just add a priority to your @Test annotation. See the following:
@Test(priority = 0)
public void function1() {
}

@Test(priority = 1)
public void function2() {
}

@Test(priority = 5)
public void function3() {
}

@Test(priority = 3)
public void function4() {
}

@Test(priority = 2)
public void function5() {
}
In this case function1 is called first, then function2; after that, function5 is called instead of function3 because of its priority. The full execution order is function1, function2, function5, function4, function3 (lowest priority value first).
dependsOnMethods will mark your tests as skipped if a method they depend on fails. This is logically correct: if testB depends on testA and testA fails, then there's no use in running testB. If all you need is for testB to run after testA, and testB doesn't depend on the result of testA, then add alwaysRun = true to your @Test annotation. These are known as soft dependencies. Refer here.
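For example, a minimal sketch of such a soft dependency (the method names are only illustrative):

import org.testng.annotations.Test;

public class SoftDependencyTest {

    @Test
    public void testA() {
        // may pass or fail
    }

    // Soft dependency: testB is ordered after testA, but still runs even if testA fails.
    @Test(dependsOnMethods = "testA", alwaysRun = true)
    public void testB() {
    }
}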
First off, you don't need the sequential or singleThreaded parameters. Listing the individual methods in the suite is all you need. Your problem probably lies somewhere else: make sure you're starting the test run using your suite, not the class itself.
In case you don't want to use suites every time (because it's tedious, error-prone and inflexible), here are a few solutions to this problem.
Put dependent methods into one method and make liberal use of Reporter.log(String, boolean). This is similar to System.out.println(String), but it also saves the String to the TestNG reports. For the second argument you always want to pass true; it controls whether the message is also printed to STDOUT. If you do this before every step, the test output alone should be enough to identify the problematic (read: failing) test steps.
Additionally, when doing this you have the possibility of using soft assertions. This basically means that you don't have to abort and fail a whole test just because one optional step doesn't work. You can continue until the next critical point and abort there, or at the end. The errors are still recorded, and you can decide whether you want to mark a test run as failed or merely unstable.
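A minimal sketch of that combination (the step names and helper methods are made up):

import org.testng.Reporter;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SingleFlowTest {

    @Test
    public void wholeFlow() {
        SoftAssert softly = new SoftAssert();

        Reporter.log("Step 1: open the page", true); // goes to the report and to STDOUT
        softly.assertTrue(openPage(), "page should open");

        Reporter.log("Step 2: check an optional label", true);
        softly.assertEquals(readLabel(), "Welcome", "optional label mismatch");

        Reporter.log("Step 3: evaluate collected results", true);
        // Any soft-assertion failures recorded above are reported here; the test fails only now.
        softly.assertAll();
    }

    // Placeholder helpers standing in for real test steps.
    private boolean openPage() { return true; }
    private String readLabel() { return "Welcome"; }
}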
Use @Test(priority = X) where X is a number. Mark all your test methods with a priority and they'll be executed in order of the priority number, from lowest to highest. The upside is that you can insert methods between existing steps and the individual annotations stay independent of each other. The downside is that this doesn't enforce any hard dependencies, only the order: if method testA with priority = 1 fails, method testB with priority = 2 will be executed nonetheless.
You could probably work around this using a listener, though; I haven't tried that yet.
Use @Test(dependsOnMethods = {"testA"}). Note that the argument here is not a string, but a list of strings (you have that wrong in your post). The upside is hard dependencies: when testB depends on testA, a failure of testA marks testB as skipped. The downside of this annotation is that you have to put all your methods into a very strict chain where every method depends on one other method. If you break this chain, e.g. by having multiple methods not depend on anything, or by having several methods depend on the same method, you'll get into hell's kitchen...
Using both priority and dependsOnMethods together doesn't get you where you'd expect it to, unfortunately: priority is mostly ignored when hard dependencies are used.
You can control the execution order using @Priority with listeners. See this link: http://beust.com/weblog/2008/03/29/test-method-priorities-in-testng/
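Roughly, that approach looks like the sketch below: a custom @Priority annotation (not a TestNG built-in) plus an IMethodInterceptor that sorts the test methods by its value. The annotation and class names here are only illustrative, and both types are kept in one file for brevity; register the interceptor via @Listeners or in testng.xml.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// Custom annotation carrying the desired ordering value.
@Retention(RetentionPolicy.RUNTIME)
@interface Priority {
    int value() default 0;
}

// Reorders the run based on the @Priority values (lowest first).
public class PriorityInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        return methods.stream()
                .sorted(Comparator.comparingInt(PriorityInterceptor::priorityOf))
                .collect(Collectors.toList());
    }

    private static int priorityOf(IMethodInstance instance) {
        Priority annotation = instance.getMethod()
                .getConstructorOrMethod()
                .getMethod()
                .getAnnotation(Priority.class);
        return annotation == null ? 0 : annotation.value();
    }
}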
Try using the dependsOnMethods dependency from the TestNG framework.
I assume, based on your annotations, that you're using TestNG. I agree with others here that 'sequential' is not the way to go.
Option 1: dependsOnMethods
If your intent is that downstream methods are not even attempted when an upstream dependency fails, try dependsOnMethods instead. With this setup, if an earlier test fails, further tests are skipped (rather than failed).
Like this:
// using dependsOnMethods
@Test
public void method1() {
    // this one passes
}

@Test(dependsOnMethods = {"method1"})
public void method2() {
    fail("assume this one fails");
}

@Test(dependsOnMethods = {"method1"})
public void method3() {
    // this one runs, since it depends on method1
}

@Test(dependsOnMethods = {"method2"})
public void method4() {
    // this one is skipped, since it depends on method2
}
Option 2: priority
If your intent is that all tests are executed, regardless of whether upstream tests pass, you can use priority.
Like this:
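(A sketch mirroring the method names from Option 1; the deliberate failure is only illustrative.)

// using priority: everything runs, in priority order, even if an earlier test fails
@Test(priority = 1)
public void method1() {
    // runs first
}

@Test(priority = 2)
public void method2() {
    fail("assume this one fails");
}

@Test(priority = 3)
public void method3() {
    // still runs, even though method2 failed
}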
Other notes
I agree wholeheartedly with sircapsalot that tests should be small and self-contained. If you're using a lot of dependencies, you probably have a problem with your overall test design.
That said, there are instances where some tests need to run first, others need to run last, and so on. And there are instances where tests should be skipped if others fail. For example, if you have the tests "updateOneRecord" and "updateManyRecords", you could logically skip the "many" test if the "one" test didn't work.
(And as a final note, your method names should begin with lowercase characters.)
This might not be the best design. I'd advise that you make an attempt to move away from it.
Good test architecture requires that each test method be SELF-SUFFICIENT and not depend on other tests completing before it runs. Why? Because say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 will fail as well. Eventually you'll have Tests 1, 2, 3, 4 and 5 all failing, and you won't even know what the cause was.
My recommendation to you would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours:
http://www.lw-tech.com/q1/ug_concepts.htm