mocking file.bufferedReader() gives NullPointerException - kotlin

Why is file.bufferedReader() giving me a NullPointerException here?
val file = mock<File>()
`when`(file.bufferedReader()).thenThrow(IOException::class.java)

According to this thread, Unable to mock BufferedWriter class in junit:
You can mock Java IO classes (including their constructors, so future
instances also get mocked) with the JMockit library, though you will
likely face difficulties such as a NullPointerException from the
Writer() constructor (depending on how the mocking was done, and which
IO classes were mocked).
However, note that the Java IO API contains many interacting classes
and deep inheritance hierarchies. In your example, the FileWriter
class would also probably need to be mocked, otherwise an actual file
would get created.
Also, usage of IO classes in application code is usually just an
implementation detail, which can easily be changed. You could switch
from IO streams to writers, from regular IO to NIO, or use the new
Java 8 utilities, for example. Or use a 3rd-party IO library.
Bottom line, it's just a terribly bad idea to try and mock IO classes.
It's even worse if (as suggested in another answer) you change the
client code to have Writers, etc. injected into the SUT. Dependency
injection is just not for this kind of thing.
Instead, use real files in the local file system, preferably from a
test directory which can be deleted after the test, and/or use fixed
resource files when only reading. Local files are fast and reliable,
and lead to more useful tests. Certain developers will say that "a
test is not a unit test if it touches the file system", but that's
just dogmatic advice.
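Following that advice, here is a minimal sketch of a real-file test in plain Java: no mocks at all, just a throwaway temporary directory that is cleaned up afterwards (the class and method names are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileRoundTrip {
    // Writes the given content to a file in a throwaway directory, reads it
    // back, and deletes everything afterwards -- a real-file test, no mocks.
    public static String roundTrip(String content) throws IOException {
        Path testDir = Files.createTempDirectory("io-test"); // deleted in finally
        Path file = testDir.resolve("data.txt");
        try {
            Files.writeString(file, content);
            return Files.readString(file); // exercise real IO against a real file
        } finally {
            Files.deleteIfExists(file);
            Files.deleteIfExists(testDir);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("key=value")); // prints "key=value"
    }
}
```

Because the file genuinely exists, code paths like `file.bufferedReader()` behave exactly as they will in production, which is the point the answer above is making.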

Related

Dependency Injection: could it make your code harder to change down the line?

From what I understand, basic dependency injection means that instead of creating a dependency inside a class, you make it outside and pass it in as a parameter.
So let's say you had a class Logger which does some stuff and then writes to a log, and it depends on a WriteToFile object. Instead of creating that WriteToFile object inside the Logger class, you create it outside and pass it in as a parameter every time you create a new Logger instance.
The thing which confuses me is this: imagine there are 1,000 places in your code where you create a Logger object, and imagine for some reason you no longer need to use the WriteToFile object in your Logger class...
Without DI, you would just have to remove the code in the Logger class which creates a WriteToFile object and uses it. This might take a couple of seconds.
But with DI, you have to find those 1,000 places where you created the Logger object and remove the code where you created the WriteToFile object and passed it in as a parameter.
Is that correct or am I missing something important?
Applying DI doesn't necessarily mean that you turn classes completely inside out, in such way that all a class's dependencies are supplied from the outside. That would easily result in an unmaintainable mess.
This is why, from the perspective of DI, we separate dependencies in 2 distinct groups:
Stable Dependencies: classes (and functionality) whose behavior is deterministic, and that you never expect to have to replace, wrap, decorate, or intercept.
Volatile Dependencies: all dependencies that are not stable are, by definition, volatile. These are classes (and functionality) whose behavior is either nondeterministic (e.g. Random.Next, DateTime.Now, or calling a database), or which are parts of your code base that you wish to be able to replace, wrap, decorate, or intercept.
There is actually more to Stable and Volatile Dependencies. A more-detailed description of these concepts can be found here.
From the perspective of DI, we are only interested in Volatile Dependencies. These are the dependencies that we wish to hide behind an abstraction, and inject through the constructor. Abstracting and injecting Volatile Dependencies gives many interesting advantages, such as:
Flexibility/maintainability: e.g., by depending on an ILogger abstraction you can switch from writing your logs to a file to writing them to a database by changing a single line of code (or flipping a configuration switch).
Testability: By allowing Volatile Dependencies to be replaced, it becomes much easier to test a single class in isolation.
Stable Dependencies, on the other hand, don't need to be swapped by definition, and tight coupling with them does not hinder testability, because their behavior is deterministic.
If we look at your specific case, your logger is a good example of a Volatile Dependency:
Loggers are typically classes that you wish to replace in a plugin like model; tomorrow you might want to log to a database instead.
Your logger writes to disk; this is nondeterministic behavior.
You might want to replace the logger with a fake implementation when testing.
This means that most classes in your system should not take a dependency on your logger implementation, but rather take a dependency on a logging abstraction.
Your logger class contains, as you explained, a WriteToFile object. To decide whether it should be supplied to the logger from the outside using DI, you need to find out whether WriteToFile is, from the perspective of the logger, a Volatile Dependency. Most likely this WriteToFile object is an intrinsic part of your logger class; together they form a single component. Classes within a single component are often expected to be tightly coupled. In that case, from the perspective of the logger, WriteToFile is a Stable Dependency.
Once you have determined WriteToFile to be a Stable Dependency from the perspective of your logger, there is no need to hide it behind an abstraction and inject it into the Logger. That would only introduce overhead without adding any benefits.
Applying DI to Stable Dependencies "make[s] your code harder to change down the line," while applying DI on Volatile Dependency makes your code easier to maintain.
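A sketch of what this division of labor looks like in code (the names OrderService, FileLogger, and InMemoryLogger are made up for illustration): the logger itself is a Volatile Dependency and is injected as an abstraction, while WriteToFile stays a private, Stable detail of the file-based implementation:

```java
public class DiExample {
    // Abstraction over the Volatile Dependency; consumers depend on this only.
    interface ILogger {
        void log(String message);
    }

    static class FileLogger implements ILogger {
        // Stands in for the WriteToFile object. It is a Stable Dependency *of
        // the logger*, so it is created right here rather than injected.
        private final StringBuilder writeToFile = new StringBuilder();
        public void log(String message) { writeToFile.append(message).append('\n'); }
        String contents() { return writeToFile.toString(); }
    }

    // Fake implementation used in tests -- swapping it in requires no change
    // to any consumer, which is the maintainability benefit described above.
    static class InMemoryLogger implements ILogger {
        final java.util.List<String> lines = new java.util.ArrayList<>();
        public void log(String message) { lines.add(message); }
    }

    static class OrderService {
        private final ILogger logger; // Volatile Dependency, constructor-injected
        OrderService(ILogger logger) { this.logger = logger; }
        void placeOrder(String id) { logger.log("placed order " + id); }
    }

    public static void main(String[] args) {
        InMemoryLogger fake = new InMemoryLogger();
        new OrderService(fake).placeOrder("42");
        System.out.println(fake.lines); // the consumer never knew which logger it got
    }
}
```

Note that the thousand call sites from the question would depend on ILogger, not on FileLogger, so removing WriteToFile later touches exactly one class.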

Akka Remote shared classes

I have two different Java 8 projects that will live on different servers and which will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
private int foo;
private String bar;
// Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a 3rd project, a library/jar for holding the shared messages (such as Fizzbuzz and others) and then pull that library in to both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns:
Akka-remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for actor props and the messages sent (including all their nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
However, the default config uses default Java serialization, which is known to be quite inefficient, so you might want to switch to protobuf, kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library: either a dedicated one, or part of the "shared models" library you mentioned in the question, depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you'll allow some personal opinion, I would suggest trying protobuf first: it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source akka-cluster apps with kryo serialization in production), but it has a few quirks with regard to collection/map handling.
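If you do go the protobuf route, the wiring is just configuration. A sketch of what the shared application.conf fragment might look like with classic Akka remoting, assuming Fizzbuzz has been regenerated as a protobuf message class (the package name com.example.messages is an assumption):

```
akka {
  actor {
    serializers {
      # Akka's built-in serializer for protobuf-generated messages
      proto = "akka.remote.serialization.ProtobufSerializer"
    }
    serialization-bindings {
      # Bind the shared message type (and any others) to it
      "com.example.messages.Fizzbuzz" = proto
    }
  }
}
```

Shipping this fragment alongside the message classes in the shared jar keeps both apps' serialization configuration in one place.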

Testing without injection

Most of what I've read about mocks and stubs (test doubles) involves some form of injection of the DOC, either through the SUT method itself or through constructor or setter methods. And injection that breaks boundaries, like InjectMocks, is frowned upon as a regular test strategy. But what if you are building a class where you do not want to expose those DOCs? Is there a way to 'unit' test such a module? Without AOP? Is such a test not a real 'unit' test anymore? Is the resistance I'm feeling really design smell, and should I expose those DOCs somehow?
For example, let's say I have the following class that I want to test (unit or otherwise):
public class RemoteRepository {
    Properties props = null;

    public RemoteRepository(Properties props) { this.props = props; }

    public Item export(String itemName) {
        JSch ssh = new JSch();
        ssh.setIdentity(props.getProperty("keyfile"));
        ssh.connect();
        ssh.execute("export " + itemName + " " + props.getProperty("exportFilename"));
        ...
    }
}
Here is a unit I'd like to write a unit test for, but I want to stub or mock out the JSch component. The objects I create inside the method, just to do what the method needs to accomplish, are not exposed outside the method at all, so I cannot inject a stub to replace them. I could change the export method signature to accept the stub, or add a constructor that does, but that changes my design just to suit a test.
Although the unit will connect to a real server to do the export in prod, when just testing the unit I either want to stub the DOC out completely, or simulate it with a real DOC that is simple and controlled.
This latter approach is like using an in-memory db instead of a real one: it acts and behaves like the eventual db that will be used, but can be confined to just what the test needs (e.g. just the tables of interest, no heavy security, etc.). So I could set up some kind of test-double sshd in my test, so that when the build runs the test, it has something to test against. This can be a lot of trouble to set up and maintain, however, and seems like overkill; sometimes stubbing out a real DOC is harder than just using the real DOC somehow.
Am I stuck trying to setup a test framework that provides an sshd test double? Am I looking at this the wrong way? Do I just use AOP or mock library methods that break the class scope boundaries?
To restate the basic problem is that a lot of times I want to test a method that has complex DOCs (ie. those that interact with other systems: network, db, etc) and I don't want to change the design just to accommodate test double DOC injection. How do you approach testing in such a scenario?
My recommendation, based on personal experience, is to write integration tests where DOCs (Depended On Components) are not mocked.
However, if for whatever reason the team insists on having unit tests instead, you would have to either use a suitable mocking tool (AOP tools are able to do this, but are not a good fit here), or change the design of the SUT and DOCs in order to use "weaker" mocking tools.
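One middle-ground design change that is smaller than constructor injection is a protected factory-method seam: production code is unchanged, and the test subclasses the SUT to override only the object creation. A minimal sketch (SshClient here is a stand-in for JSch, not the real JSch API):

```java
public class SeamExample {
    // Stand-in for the real SSH client; the names are illustrative only.
    static class SshClient {
        String execute(String command) {
            throw new IllegalStateException("no network in tests");
        }
    }

    static class RemoteRepository {
        String export(String itemName) {
            SshClient ssh = createSsh(); // seam: the only place the client is created
            return ssh.execute("export " + itemName);
        }
        // Protected factory method: production callers never notice it,
        // but a test can subclass and override it.
        protected SshClient createSsh() { return new SshClient(); }
    }

    public static void main(String[] args) {
        // Test-time: override the seam; no constructor or setter injection needed.
        RemoteRepository repo = new RemoteRepository() {
            @Override protected SshClient createSsh() {
                return new SshClient() {
                    @Override String execute(String command) { return "stubbed: " + command; }
                };
            }
        };
        System.out.println(repo.export("foo")); // prints "stubbed: export foo"
    }
}
```

An alternative that requires no design change at all is Mockito's mockConstruction (from mockito-inline), which intercepts `new JSch()` inside the method under test, though it is exactly the kind of boundary-breaking tool the question is wary of.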

BDD and outside-in approach, how to start with testing

All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in file and in database
both groups have to be merged into one parameters set
parameters from database should override those from files
Now I'd like to implement this with outside-in approach, and I stuck just at the beginning. Hope you can help me to get going.
My questions are:
What test should I start with? So far I just have something like this:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end with. I don't know how I'm gonna write all that stuff yet and so on.
What should I put in this test to make it valid? Should I mock something? If so how to define those dependencies?
If you could please show me the path to go with this, write some plan, some tests skeletons, what to do and in what order it'd be super-cool. I know it's a lot of writing, so maybe you can point me to any resources? All the resources about outside-in approach I've found were about simple cases with no dependencies etc.
And two questions to mocking approach.
if mocking is about interactions and their verification, does it mean that there should not be state assertions in such tests (only mock verifications) ?
if we replace something that doesn't exist yet with mock just for test, do we replace it later with real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification'; that would be only half of the story. In fact, you're using it in two different ways:
Checking, if a certain call was made, and eventually also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class-under-test (CUT), eventually setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which sort of runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second one is the more traditional, 'static' unit testing, which asserts a fact.
You shouldn't ever have the need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically speaking, completely impossible). If you feel like you need to do this, it is a clear indication that you're trying to make the second step before the first.
Regarding your notion of 'outside-in approach': To be honest, I've never heard of this before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than clarifying them (at least for the moment).
Now onto your first question: (What test should I start with?):
First things first - you need some mechanism to read the configuration values from file and database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean Separation of Concerns to do TDD effectively - this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) which has two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
Only then - with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies - would you start to write tests for, and implement, the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy, I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor - this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps in unit testing, especially when using mock objects).
The test now asserts exactly what it states: The ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms on their part return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (its handling of empty values, namely), and nothing else.
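For completeness, the same test can be written in plain Java without any mocking framework, using hand-rolled stubs. This sketch assumes the interface shape from the answer above and a map-based Configuration; all names are illustrative:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ConfigurationAssemblerExample {
    // Assumed reader abstraction, as suggested in the answer.
    interface IConfigurationReader {
        Map<String, String> read();
    }

    // The class under test: merges both sources, database overriding files,
    // exactly as the question's requirements state.
    static class ConfigurationAssembler {
        private final IConfigurationReader fileReader, dbReader;
        ConfigurationAssembler(IConfigurationReader fileReader, IConfigurationReader dbReader) {
            this.fileReader = fileReader;
            this.dbReader = dbReader;
        }
        Map<String, String> getConfiguration() {
            Map<String, String> merged = new HashMap<>(fileReader.read());
            merged.putAll(dbReader.read()); // database entries win over file entries
            return merged;
        }
    }

    public static void main(String[] args) {
        // Hand-rolled stubs standing in for [Mock of FileConfigurationReader] etc.
        IConfigurationReader emptyFile = Collections::emptyMap;
        IConfigurationReader emptyDb = Collections::emptyMap;
        ConfigurationAssembler assembler = new ConfigurationAssembler(emptyFile, emptyDb);
        System.out.println(assembler.getConfiguration().isEmpty()); // prints "true"

        // A second test for the override rule from the requirements:
        IConfigurationReader file = () -> Map.of("timeout", "10");
        IConfigurationReader db = () -> Map.of("timeout", "30");
        System.out.println(new ConfigurationAssembler(file, db)
                .getConfiguration().get("timeout")); // prints "30"
    }
}
```

The stubs are one-liners here because the reader interface is narrow; this is a common sign that the dependencies were cut at the right place.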
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!

Testing software: fake vs stub

There is quite a bit written about stubs vs. mocks, but I can't see the real difference between a fake and a stub. Can anyone shed some light on it?
I assume you are referring to the terminology as introduced by Meszaros. Martin Fowler also mentions them regularly. I think he explains the difference pretty well in that article.
Nevertheless, I'll try again in my own words :)
A Fake is closer to a real-world implementation than a stub. Stubs basically contain hard-coded responses to an expected request; they are commonly used in unit tests, but are incapable of handling input other than what was pre-programmed.
Fakes have a more real implementation, like some kind of state that may be kept for example. They can be useful for system tests as well as for unit testing purposes, but they aren't intended for production use because of some limitation or quality requirement.
A fake has the same behavior as the thing that it replaces.
A stub has a "fixed" set of "canned" responses that are specific to your test(s).
A mock has a set of expectations about calls that are made. If these expectations are not met, the test fails.
All of these are similar in that they replace production collaborators that the code under test uses.
To paraphrase Roy Osherove in his book The Art of Unit Testing (second edition):
A Fake is any object made to imitate another object. Fakes can be used either as stubs or mocks.
A Stub is a fake that is provided to the class you are testing to satisfy its requirements, but is otherwise ignored in the unit test.
A Mock is a fake that is provided to the class you are testing, and will be inspected as part of the unit test to verify functionality.
For example, the MyClass class you are testing may utilize both a local logger and a third-party web service as part of its operation. You would create a FakeLogger and a FakeWebService, but how they are used determines whether they are stubs or mocks.
The FakeLogger might be used as a stub: it is provided to MyClass and pretends to be a logger, but actually ignores all input and is otherwise just there to get MyClass to operate normally. You don't actually check FakeLogger in your unit tests, and as far as you're concerned it's there to make the compiler shut up.
The FakeWebService might be used as a mock: you provide it to MyClass, and in one of your unit tests you call MyClass.Foo(), which is supposed to call the third-party web service. To verify that this happened, you now check your FakeWebService to see if it recorded the call that it was supposed to receive.
Note that either of these could be reversed and depend on what it is you're testing in a particular unit test. If your unit test is testing the content of what is being logged then you could make a FakeLogger that dutifully records everything it's told so you can interrogate it during the unit test; this is now a mock. In the same test you might not care about when the third-party web service is called; your FakeWebService is now a stub. How you fill in the functions of your fake thus depends on whether it needs to be used as a stub or a mock or both.
In summary (direct quote from the book):
A fake is a generic term that can be used to describe either a stub or a mock object because they both look like the real object. . . . The basic difference is that stubs can't fail tests. Mocks can.
All the rest is implementation details.
These might help
http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
http://hamletdarcy.blogspot.com/2007/10/mocks-and-stubs-arent-spies.html