Having a common setup function for tests

So I have a script test.py:

from nose.tools import with_setup

class Test:
    # @with_setup(setup_func, teardown_func)
    def test(self):
        print("Hello World")
Can I have setup_func() and teardown_func() defined in an __init__.py in the same directory as test.py?
Basically, the objective is to have a common setup and teardown for a bunch of test cases.

There is nothing magical about setup_func and teardown_func. You can certainly do that by importing them from any module, whether it is called __init__.py or anything else. I would suggest you put them in some other module, say test_setup.py, because you may not want those definitions at the package level of the test module, and the "explicit is better than implicit" philosophy will be followed.
ADDED: However, you have a bigger problem here: with_setup is useful only for test functions, not for test methods or methods inside subclasses. As your code is written, your setup function will not be called regardless of where it is defined; see here.
If you intend to write your tests as class methods, you will have to rely on the setUp and tearDown methods of unittest. If you wish, you can inherit your test class from a special setup class and reuse that same class for different tests.
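For illustration, here is a minimal sketch of both routes, with the shared fixtures living in a separate test_setup.py as suggested above (file and function names are illustrative):

# test_setup.py -- shared fixtures
def setup_func():
    print("setting up")

def teardown_func():
    print("tearing down")

# test.py
import unittest
from nose.tools import with_setup
from test_setup import setup_func, teardown_func

@with_setup(setup_func, teardown_func)
def test_function():
    # with_setup is honored here because this is a plain test function
    print("Hello World")

class BaseTest(unittest.TestCase):
    # shared fixture for method-style tests, reusable via inheritance
    def setUp(self):
        setup_func()

    def tearDown(self):
        teardown_func()

class Test(BaseTest):
    def test_method(self):
        print("Hello World")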

How to mock module attributes and other modules in Elixir?

I'm very new to Elixir, so this is probably basic, but I couldn't see much online. If I have the following code:
defmodule A do
  def my_first_function do
    # does stuff
  end
end

defmodule B do
  @my_module_attribute A.my_first_function()
end
How do I mock module A in my tests so I can just tell the tests what I want it to return?
Also, is it possible to mock @my_module_attribute directly instead? It would be good to have an answer to both approaches (and to know which is considered the better pattern).
I think perhaps you are asking module attributes to do more than they are useful for, and perhaps there is some confusion over the exact definition of "mock".
Avoid thinking of module attributes as "class variables" -- even though they look like they might serve the same purpose and they sit in the same place, their behavior is different. Be especially careful when a module attribute relies on a function call. One common pitfall is using module attributes to store values read from configuration, e.g. via Application.fetch_env!/2. The problem is that module attributes are evaluated at compile time (not at run time), so you can easily end up with an unexpected value. (The compiler now warns about this gotcha explicitly, and Application.compile_env!/2 is now provided to better communicate that particular use case.)
I usually reserve module attributes for raising the visibility of simple constants and I tend to avoid using them for storing the result of any function execution.
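To make the compile-time gotcha concrete, here is a minimal sketch (the :my_app application and :api_url key are hypothetical):

defmodule MyApp.Client do
  # Read once, at compile time: the attribute keeps whatever value the
  # config had when the module was compiled. compile_env!/2 makes this
  # explicit, and the compiler tracks the value across recompilations.
  @api_url Application.compile_env!(:my_app, :api_url)

  def compile_time_url, do: @api_url

  # Read at run time: always reflects the current configuration.
  def runtime_url, do: Application.fetch_env!(:my_app, :api_url)
end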
When it comes back to "mocking", you still have to think about the fact that Elixir is compiled -- it's not JavaScript. Someone geekier than me can explain the mechanics, but module attributes don't exist at runtime the same way they do at compile time.
"Mocking" during tests usually means substituting one module or function for another, and this swap is often easier to do at runtime. One common pattern looks a bit like Dependency Injection (but purists may object to the comparison).
Consider a function like this that relies on some OtherModule to do its work:
def my_function(input, opts \\ []) do
  other_module = Keyword.get(opts, :service, OtherModule)
  other_module.get_thing(input)
  # do more stuff...
end
Instead of hard-coding OtherModule inside my_function, the module name is read from the optional opts. This provides a useful way to override the behavior of OtherModule when you need to test my_function.
In your test, you can do something like this:
test "example mock" do
assert something = MyApp.MyMod.my_function("foo", service: MockService)
end
When the test provides MockService as the :service module, the get_thing function will be called on MockService rather than on OtherModule. If the provided module does not define a get_thing function, the code will fail to execute. This is where having a behaviour comes in handy, because it helps guarantee that your module has implemented the needed functions. The Mox testing library, for example, relies on behaviour contracts, but from the example above you can see the premise.
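For illustration, a minimal sketch of such a behaviour contract, with the callback name taken from the example above (the module name and return types are assumptions):

defmodule MyApp.Service do
  # The contract that both the real module and the mock must implement.
  @callback get_thing(term()) :: {:ok, term()} | {:error, term()}
end

defmodule OtherModule do
  @behaviour MyApp.Service

  @impl true
  def get_thing(input) do
    # the real implementation would do the actual work here
    {:ok, input}
  end
end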
If you squint, you can see that this "injection" approach is somewhat similar to how JavaScript often accepts a callback function as an argument, but in Elixir it is more common to pass around module names than captured functions.

Extension lifecycle and state in JUnit 5

The user guide contains the following:
Usually, an extension is instantiated only once.
It's not very clear when an extension can be instantiated multiple times. I'm maintaining a test suite with multiple extensions, and every extension stores its state in class fields. Everything works fine, but can I rely on this, or should I refactor the code to use ExtensionContext.Store?
Usually, an extension is instantiated only once. So the question becomes relevant: How do you keep the state from one invocation of an extension to the next?
I think this sentence is meant to highlight that the same instance of an extension might be reused for multiple tests. I doubt that the instance would be replaced in the middle of a test.
Multiple instances of an extension might be created when a test uses programmatic extension registration (with @RegisterExtension). In that case, the test class creates its own instance of the extension, and JUnit cannot reuse this instance in other test classes. An instance created by declarative extension registration (with @ExtendWith), however, might be used for multiple test classes.
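A minimal sketch contrasting the two registration styles (MyExtension is a hypothetical extension implementing one of the Extension interfaces):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.RegisterExtension;

// Declarative registration: JUnit creates the instance and may reuse
// it across multiple test classes.
@ExtendWith(MyExtension.class)
class DeclarativeTest {
    @Test
    void example() { /* ... */ }
}

// Programmatic registration: this test class owns its instance,
// which JUnit cannot reuse in other test classes.
class ProgrammaticTest {
    @RegisterExtension
    static MyExtension extension = new MyExtension();

    @Test
    void example() { /* ... */ }
}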

Is it a bad idea to use Dependency Injection objects in unit tests?

I am not sure if what I am doing is actually the "correct" way of doing unit tests with DI. Right now I ask my ViewModelLocator to create all the instances I need and just get the instance I want to test. This makes it very simple to test a single instance, because otherwise: let's assume Receipt needs a Reseller object to be created, Reseller needs a User object, and User needs some other object, which creates a whole chain of objects just to test one single instance.
With DI, interfaces usually get mocked and passed to the object you would like to create, but what about simple Entities/ViewModels?
What is the best practice for unit testing with DI involved?
public class JournalTest
{
    private ReceiptViewModel receipt;
    private ViewModelLocator locator;

    [SetUp]
    public void SetUp()
    {
        locator = new ViewModelLocator();
        receipt = SimpleIoc.Default.GetInstance<ReceiptViewModel>();
    }

    [Test]
    public void CheckAndCreateNewJournal_Should_Always_Create_New_Journal()
    {
        receipt.Sale.Journal = null;
        receipt.Sale.CheckAndCreateNewJournal();
        Assert.NotNull(receipt.Sale.Journal);
    }
}
First, you aren't using Dependency Injection in your code. What you have there is called Service Locator (Service Locator creates tight coupling to the IoC container and makes it hard to test).
And yes, it's bad (using either a Service Locator or a DI container in your tests), because it means you are not writing a unit test, you are writing an integration test.
In your case, ReceiptViewModel is not tested alone; your test also exercises the dependencies of ReceiptViewModel (the repository, injected services, etc.). That is called an integration test.
A unit test has to test only the class in question and no dependencies. You can achieve this either with stubs (dummy implementations of your dependencies, assuming you have used interfaces for them) or with mocks (using a mocking framework like Moq).
Mocks are easier/better, since you don't have to implement the whole class; you just set up mocks for the methods you know will be required by your test case.
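For illustration, a minimal sketch with Moq (IJournalRepository, Journal, and the ReceiptViewModel constructor are assumptions; the point is that only the direct dependency is mocked, not the whole object graph):

using Moq;
using NUnit.Framework;

[TestFixture]
public class ReceiptViewModelTest
{
    [Test]
    public void CheckAndCreateNewJournal_Should_Always_Create_New_Journal()
    {
        // Mock only the direct dependency; no ViewModelLocator and no
        // Receipt -> Reseller -> User chain is needed.
        var repository = new Mock<IJournalRepository>();
        repository.Setup(r => r.CreateJournal()).Returns(new Journal());

        var receipt = new ReceiptViewModel(repository.Object);

        receipt.Sale.Journal = null;
        receipt.Sale.CheckAndCreateNewJournal();

        Assert.NotNull(receipt.Sale.Journal);
    }
}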
As an additional note, entities you will have to create yourself. Depending on your unit test framework, there may be data-driven tests (via attributes on the test method), or you can simply create them in code; if you have models/entities used in many test classes, write a helper method for it, as sketched below.
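For example, a small builder helper (the entity names follow the question; the constructors are hypothetical):

// Creates the Receipt -> Reseller -> User chain in one place,
// so individual tests stay short.
private static Receipt CreateTestReceipt()
{
    var user = new User();
    var reseller = new Reseller(user);
    return new Receipt(reseller);
}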
ViewModels shouldn't be injected into constructors (or at least this should be avoided), as it couples them tightly.
Unit tests should run quickly and should be deterministic. That means you have to mock/stub everything that breaks these two rules.
The best way to mock/stub dependencies is to inject them. In production, classes are assembled by the DI framework, but in unit tests you should assemble them manually and inject mocks where needed.
There is also the classic unit test approach where you stub/mock every dependency of your class, but it's useless, since you don't gain anything by it.
Martin Fowler wrote a great article about that: link
You should also read Growing Object-Oriented Software, Guided by Tests. A ton of useful knowledge.

Extending an objc unit test class runs the superclass's tests again

I am writing Objective-C unit tests in Xcode. I would like to extend my base class so I can reuse my setUp/tearDown methods. I created a header and put the imports there, since Xcode defaults to making just an implementation file.
Extending the base class works fine, except that all of its test methods are run a second time (or n times, depending on number of children).
My question is whether this is appropriate practice, or should I try something like base classes with no test methods?
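For reference, a minimal sketch of the "base class with no test methods" approach mentioned above (class names are hypothetical); since the base class defines no test methods of its own, each subclass runs only its own tests, once:

// BaseTestCase.h
#import <XCTest/XCTest.h>

@interface BaseTestCase : XCTestCase
@end

// BaseTestCase.m -- shared setUp/tearDown, no test methods
@implementation BaseTestCase
- (void)setUp {
    [super setUp];
    // shared setup here
}

- (void)tearDown {
    // shared teardown here
    [super tearDown];
}
@end

// MyTests.m
@interface MyTests : BaseTestCase
@end

@implementation MyTests
- (void)testSomething {
    XCTAssertTrue(YES);
}
@end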

In Groovy, if the class is not called, why an instantiation exception?

When I execute the following experimental code subset at http://groovyconsole.appspot.com/:

class FileHandler {
    def rootDir

    FileHandler(String batchName) {
        rootDir = '.\\Results\\' + batchName + '\\'
    }
}

//def fileHandler = new FileHandler('Result-2012-12-15-10-48-55')
An exception results:
java.lang.NoSuchMethodException: FileHandler.<init>()
When I uncomment the last line that instantiates the class, the error goes away.
Can someone explain why this is? I'm basically attempting to segregate the definition and instantiation of the class into two files to be evaluated separately. Thanks.
I'm not sure of the exact implementation details behind http://groovyconsole.appspot.com/ (the source linked from there points to Gaelyk, which I've not looked over). I'd bet it's looking for a no-arg constructor for the class you've presented, in an effort to find something runnable. (Note that if you provide one, it still won't work, as it wants a main() :/)
Running locally in groovyConsole will die a bit sooner, with the following error message:
groovy.lang.GroovyRuntimeException: This script or class could not be run. It should either:
- have a main method,
- be a JUnit test or extend GroovyTestCase,
- implement the Runnable interface,
- or be compatible with a registered script runner.
This is perhaps more descriptive and to the point. If you want to run some Groovy as a simple script, you'll need to supply a jumping-in point. The easiest way is an executable statement in your Groovy file, outside of any class definition (e.g., uncommenting your instantiation statement). Alternatively, a class with a main method will do it (see here).
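For the main-method route, a sketch using the class from the question:

class FileHandler {
    def rootDir

    FileHandler(String batchName) {
        rootDir = '.\\Results\\' + batchName + '\\'
    }

    // A main method gives the runner its jumping-in point.
    static void main(String[] args) {
        println new FileHandler('Result-2012-12-15-10-48-55').rootDir
    }
}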
If two files is how you want to break it up, you can save the class definition in one Groovy file (e.g., First.groovy) and create a second (e.g., Second.groovy) with just your executable statements. (I believe the first one will be on the classpath automatically when you run groovy Second, if both are in the same directory.)
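Concretely, with First.groovy holding the FileHandler class exactly as written in the question, Second.groovy needs only the executable statements:

// Second.groovy
def fileHandler = new FileHandler('Result-2012-12-15-10-48-55')
println fileHandler.rootDir

Running groovy Second.groovy from the directory containing both files should then compile First.groovy automatically.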