I'm new to Golang and have been taking a TDD approach while learning the language. I've been getting along okay, yet I find testing third-party packages quite clunky, which leads me to believe that I've been taking the wrong approach.
The specific case I'm having trouble with is mocking a Redis client for error handling. The approach I've taken is to create my own interface, and the implementation wraps the client's methods that I want to use.
type Redis interface {
    Get(key string) (string, error)
}

type RedisClient struct {
    client *redis.Client
}

func (redisClient *RedisClient) New(client *redis.Client) *RedisClient {
    redisClient.client = client
    return redisClient
}

func (redisClient *RedisClient) Get(key string) (string, error) {
    return redisClient.client.Get(key).Result()
}
I can then create a mock which implements that same interface to return whichever values I specify, particularly for testing error handling.
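For illustration, such a mock might be as simple as this (the names here are placeholders, not from any library):

// Hypothetical mock of the Redis interface above, used only in tests.
type RedisMock struct {
    GetFn func(key string) (string, error)
}

func (m *RedisMock) Get(key string) (string, error) {
    if m.GetFn != nil {
        return m.GetFn(key)
    }
    return "", nil
}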
I've hit a roadblock where a specific method on the client to perform transactions (MULTI) returns another interface belonging to that package. What would I do in this scenario? Implementing that interface myself seems out of the question.
Similarly, as usage of this client grows, my own implementation can grow to the point that it implements the whole interface to Redis - this seems to go against the whole idea of delegating this out to an external dependency.
Is there a better way to test third-party packages like this, for things such as error handling?
One approach would be to create a type that focuses on what you want to accomplish instead of on which methods of the client you are using.
Let's say all you want is a store to save and fetch users; you could imagine an interface like this:
type UserStore interface {
    SaveUser(*User) error
    GetUserByID(id string) (*User, error)
    SearchUsers(query string) ([]User, error)
}
You could then implement a Redis version of this storage and call whatever client methods you want inside; it doesn't matter. You could even implement one backed by PostgreSQL or anything else.
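For illustration, a Redis-backed version might look roughly like this; the "user:"+ID key scheme, the JSON encoding, and the method bodies are assumptions for the sketch, and the client calls follow the go-redis version used in the question:

// Sketch of a Redis-backed UserStore (assumes "encoding/json" and the same
// go-redis client as in the question are imported).
type RedisUserStore struct {
    client *redis.Client
}

func (s *RedisUserStore) SaveUser(u *User) error {
    data, err := json.Marshal(u)
    if err != nil {
        return err
    }
    return s.client.Set("user:"+u.ID, data, 0).Err()
}

func (s *RedisUserStore) GetUserByID(id string) (*User, error) {
    data, err := s.client.Get("user:" + id).Result()
    if err != nil {
        return nil, err
    }
    var u User
    if err := json.Unmarshal([]byte(data), &u); err != nil {
        return nil, err
    }
    return &u, nil
}

// SearchUsers is left out here; it depends on how users are indexed in Redis.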
Also, mocking is much easier with this approach, since all you need to do is implement this interface instead of the Redis one.
Here is an example of a mock version of this interface:
type UserStoreMock struct {
    SaveUserFn      func(*User) error
    SaveUserInvoked bool
    ...
}

func (m *UserStoreMock) SaveUser(u *User) error {
    m.SaveUserInvoked = true
    if m.SaveUserFn != nil {
        return m.SaveUserFn(u)
    }
    return nil
}
...
You can then use this mock in tests like this:
var m UserStoreMock
m.SaveUserFn = func(u *User) error {
    if u.ID != "123" {
        t.Fatal("Bad ID")
    }
    return ErrDuplicateError
}
...
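You can also assert afterwards that the mock was actually invoked, for example:

// After exercising the code under test:
if !m.SaveUserInvoked {
    t.Fatal("expected SaveUser to be called")
}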
I'm curious about an example given in the Kotlin documentation regarding sealed classes:
fun log(e: Error) = when (e) {
    is FileReadError -> { println("Error while reading file ${e.file}") }
    is DatabaseError -> { println("Error while reading from database ${e.source}") }
    is RuntimeError -> { println("Runtime error") }
    // the `else` clause is not required because all the cases are covered
}
Let's imagine the classes are defined as follows:
sealed class Error
class FileReadError(val file: String): Error()
class DatabaseError(val source: String): Error()
class RuntimeError : Error()
Is there any benefit to using when over using polymorphism:
sealed class Error {
    abstract fun log()
}

class FileReadError(val file: String) : Error() {
    override fun log() { println("Error while reading file $file") }
}

class DatabaseError(val source: String) : Error() {
    override fun log() { println("Error while reading from database $source") }
}

class RuntimeError : Error() {
    override fun log() { println("Runtime error") }
}
The only reason I can think of is that we may not have access to the source code of those classes in order to add our log method to them. Otherwise, it seems that polymorphism is a better choice than instance checking (see [1] or [2], for instance).
This is described as "Data/Object Anti-Symmetry" in the book Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin.
In the first example (Data style), you are keeping your error classes dumb with an external function that handles all types. This style is in opposition to using polymorphism (Object style) but there are some advantages.
Suppose you were to add a new external function, one that returns an icon to show the user when the error happens. The first advantage is you can easily add this icon function without changing any line in any of your error classes and add it in a single place. The second advantage is in the separation. Maybe your error classes exist in the domain module of your project and you'd prefer your icon function to be in the ui module of your project to separate concerns.
So when keeping the sealed classes dumb, it's easy to add new functions and easy to separate them, but it's hard to add new classes of errors because then you need to find and update every function. On the other hand when using polymorphism, it's hard to add new functions and you can't separate them from the class, but it's easy to add new classes.
The benefit of the first (type-checking) example is that the log messages do not have to be hardcoded into the Error subclasses. In this way, clients could potentially log different messages for the same subclass of Error in different parts of an application.
The second (polymorphic) approach assumes everyone wants the same message for each error and that the developer of each subclass knows what that error message should be for all future use cases.
There is an element of flexibility in the first example that does not exist in the second. The previous answer from @Trevor examines the theoretical underpinning of this flexibility.
If I am modeling my value objects using Kotlin data classes, what is the best way to handle validation? The init block seems like the only logical place, since it executes after the primary constructor.
data class EmailAddress(val address: String) {
    init {
        if (address.isEmpty() || !address.matches(Regex("^[a-zA-Z0-9]+@[a-zA-Z0-9]+(\\.[a-zA-Z]{2,})$"))) {
            throw IllegalArgumentException("$address is not a valid email address")
        }
    }
}
An alternative is JSR-303 validation. The downside to this is that it requires load-time weaving:
@Configurable
data class EmailAddress(@Email val address: String) {
    @Autowired
    lateinit var validator: Validator

    init {
        validator.validate(this)
    }
}
It seems unreasonable to me to have object creation validation anywhere else but in the class constructor. This is the place responsible for the creation, so that is the place where the rules which define what is and isn't a valid instance should be. From a maintenance perspective it also makes sense to me as it would be the place where I would look for such rules if I had to guess.
I did make a comment, but I thought I would share my approach to validation instead.
First, I think it is a mistake to perform validation on instantiation. This will make the boundary between deserialization and handing over to your controllers messy. Also, to me, if you are sticking to a clean architecture, validation is part of your core logic, and you should ensure with tests on your core logic that it is happening.
So, to tackle this the way I want, I first define my own core validation API. Pure Kotlin. No frameworks or libraries. Keep it clean.
interface Validatable {
    /**
     * @throws [ValidationErrorException]
     */
    fun validate()
}

class ValidationErrorException(
    val errors: List<ValidationError>
) : Exception() {
    /**
     * Convenience method for getting a data object from the Exception.
     */
    fun toValidationErrors() = ValidationErrors(errors)
}

/**
 * Data object to represent the data of an Exception. Convenient for serialization.
 */
data class ValidationErrors(
    val errors: List<ValidationError>
)

data class ValidationError(
    val path: String,
    val message: String
)
Then I have framework-specific implementations. For example, a javax.validation implementation:
open class ValidatableJavax : Validatable {
    companion object {
        val validator = Validation.buildDefaultValidatorFactory().validator!!
    }

    override fun validate() {
        val violations = validator.validate(this)
        val errors = violations.map {
            ValidationError(it.propertyPath.toString(), it.message)
        }.toMutableList()
        if (errors.isNotEmpty()) {
            throw ValidationErrorException(errors = errors)
        }
    }
}
The only problem with this is that the javax annotations don't play so well with Kotlin data classes, but here is an example of a class with validation:
import java.math.BigDecimal
import javax.validation.constraints.Positive

class MyObject(
    myNumber: BigDecimal
) : ValidatableJavax() {
    @get:Positive(message = "Must be positive")
    val myNumber: BigDecimal = myNumber
}
Actually, it looks like validation is not a responsibility of data classes. The data keyword speaks for itself: such classes are meant for holding data.
So if you would like to validate a data class, it makes more sense to put @get: validation annotations on the constructor arguments and to validate outside the data class, in the class responsible for constructing it.
Your second option is not to use a data class: use a plain class and implement the whole validation logic in its constructor, passing the validator in.
Also, if you use the Spring Framework you can make this class a bean with prototype scope, but chances are it will be quite uncomfortable to work with that kind of spaghetti code :)
I disagree with the following statement:
Seems like the init block is the only logical place since it executes after the primary constructor.
Validation should not be done at construction time, because sometimes you need intermediate steps before getting a valid object, and it does not work well with Spring MVC, for example.
Maybe use a dedicated interface (as suggested in the previous answer) with a method dedicated to executing validation.
For the validation framework, I personally use valiktor, as I find it a lot less cumbersome than JSR-303.
I'm making use of a third-party library that doesn't have any interfaces for its classes. I can use them in my structs no problem, but they have side effects that I want to avoid when unit testing.
// Somewhere there are a couple of structs, with no interfaces. I don't own the code.
// Each has only one method.
type ThirdPartyEntry struct{}

func (e ThirdPartyEntry) Resolve() string {
    // Do some complex stuff with side effects
    return "I'm me!"
}

// This struct returns an instance of the other one.
type ThirdPartyFetcher struct{}

func (f ThirdPartyFetcher) FetchEntry() ThirdPartyEntry {
    // Do some complex stuff with side effects and return an entry
    return ThirdPartyEntry{}
}

// Now my code.
type AwesomeThing interface {
    BeAwesome() string
}

// I have a class that makes use of the third party.
type Awesome struct {
    F ThirdPartyFetcher
}

func (a Awesome) BeAwesome() string {
    return strings.Repeat(a.F.FetchEntry().Resolve(), 3)
}

func NewAwesome(fetcher ThirdPartyFetcher) Awesome {
    return Awesome{
        F: fetcher,
    }
}

func main() {
    myAwesome := NewAwesome(ThirdPartyFetcher{})
    log.Println(myAwesome.BeAwesome())
}
This works! But I want to write some unit tests, so I'd like to mock both third-party structs. To do so I believe I need interfaces for them, but since ThirdPartyFetcher returns ThirdPartyEntrys, I cannot figure out how.
I created a pair of interfaces which match up with the two third-party classes. I'd then like to rewrite the Awesome struct and method to use the generic Fetcher interface. In my test, I would call NewAwesome() passing in a testFetcher, a struct which also implements the interface.
type Awesome struct {
    F Fetcher
}

func NewAwesome(fetcher Fetcher) Awesome {
    return Awesome{
        F: fetcher,
    }
}

type Entry interface {
    Resolve() string
}

// Double check ThirdPartyEntry implements Entry
var _ Entry = (*ThirdPartyEntry)(nil)

type Fetcher interface {
    FetchEntry() Entry
}

// Double check ThirdPartyFetcher implements Fetcher
var _ Fetcher = (*ThirdPartyFetcher)(nil)
I omit the test code because it's not relevant. This fails on the last line shown.
./main.go:49: cannot use (*ThirdPartyFetcher)(nil) (type *ThirdPartyFetcher) as type Fetcher in assignment:
    *ThirdPartyFetcher does not implement Fetcher (wrong type for FetchEntry method)
        have FetchEntry() ThirdPartyEntry
        want FetchEntry() Entry
The signatures are different, even though I already showed that ThirdPartyEntry implements Entry. I believe this is disallowed because it would lead to something like slicing (in the polymorphic sense, not the Go slice sense). Is there any way for me to write such a pair of interfaces? It should be the case that the Awesome class doesn't even know ThirdParty exists - it's abstracted behind the interface and injected when main calls NewAwesome.
It's not very pretty, but one way would be to wrap the third-party type:
type fetcherWrapper struct {
    ThirdPartyFetcher
}

func (fw fetcherWrapper) FetchEntry() Entry {
    return fw.ThirdPartyFetcher.FetchEntry()
}
I'd say mocking things that return structs vs interfaces is a relatively common problem without any great solutions apart from a lot of intermediate wrapping.
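For completeness, the test side can then use plain hand-written stubs that satisfy the two interfaces; the names below are illustrative, not from the original code:

// Test doubles: no third-party code is touched.
type stubEntry struct{ value string }

func (e stubEntry) Resolve() string { return e.value }

type stubFetcher struct{ entry Entry }

func (f stubFetcher) FetchEntry() Entry { return f.entry }

// In a test:
//   a := NewAwesome(stubFetcher{entry: stubEntry{value: "fake"}})
//   if got := a.BeAwesome(); got != "fakefakefake" { t.Fatalf("got %q", got) }

In main, the real client is then passed through the wrapper, e.g. NewAwesome(fetcherWrapper{ThirdPartyFetcher{}}).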
Correct me if I'm wrong, but my understanding of an API is that it is something that allows me to modify and request data through an interface, which is what I want to do in Go. For example, I have a user interface:
type IUser interface {
    GetId() int
    GetName() string
    GetSignupDate() time.Time
    GetPermissions() []IPermission
    Delete()
}
This already looks like Active Record to me, and if I want to create a new user with a new ID I would have to use new, since Go doesn't support static functions as far as I know. This means I would also need a commit function in my interface, which makes it even worse for me. What am I doing wrong here?
In Go, interfaces are behavioural. That is, they describe what a thing does more than what it is. Your example looks like you're trying to write C# in Go, with the heavy use of I in front of interface names. However, an interface that is only implemented by one type is a bit of a waste of time.
Instead, consider:
type Deleteable interface { // You'd probably be tempted to call this IDeleteable.
    // Effective Go suggests Deleter, but the grammar
    // sounds weird.
    Delete() error
}
Now you can create a function to perform batch deletes:
func BatchDelete(victims []Deleteable) {
    // Do some cool things for batching, connect to db, start a transaction
    for _, victim := range victims {
        victim.Delete() // Or arrange for this function to be called somehow.
    }
}
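As a hypothetical usage sketch (both types below are made up), any value with a matching Delete method can go into the same batch, with no explicit "implements" declaration anywhere:

type Session struct{ Token string }

func (s Session) Delete() error { return nil /* expire the session */ }

type TempFile struct{ Path string }

func (t TempFile) Delete() error { return nil /* remove the file */ }

// Both satisfy Deleteable implicitly:
//   BatchDelete([]Deleteable{Session{Token: "abc"}, TempFile{Path: "/tmp/x"}})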
You'd probably get started faster by creating an interface for Update, Serialize and so on, and storing your actual users/permissions/etc. in concrete structs that implement those methods. (Note that in Go you don't have to declare that a type implements an interface; it happens "automatically".) You also don't need a separate interface per method (Updater, Serializable); you can bundle them all into one interface:
type DBObject interface {
    Update()
    Serialize() RowType
    Delete()
}

type User struct {
    Id   int
    Name string
    // ... etc
}
Remember, your model can always "Fill in" a User object to return from your API, even if the actual representation of the User object is something much more diffuse, e.g. RDF triples.
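As a sketch of that (assuming RowType is a struct defined elsewhere, and the method bodies here are placeholders), the concrete User type satisfies DBObject simply by having the right method set:

func (u *User) Update()            { /* persist changes to u */ }
func (u *User) Serialize() RowType { return RowType{ /* fill from u */ } }
func (u *User) Delete()            { /* remove u from storage */ }

// Optional compile-time check that *User satisfies DBObject.
var _ DBObject = (*User)(nil)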
I agree with @ZanLynx's comments. Go's standard library seems to favour the interface way for APIs.
package main

import "fmt"

type S string

type I interface{ M() S }

func (s S) M() S { return s }

func API(i I) I { return i.M() }

func main() {
    s := S("interface way")
    fmt.Println(API(s))
}
It may be worth noting that APIs that take in a one-method interface could be re-written as taking a function type.
package main

import "fmt"

func API(f func() string) string { return f() }

func main() {
    f := func() string { return "higher-order way" }
    fmt.Println(API(f))
}
As an API author, one could provide both mechanisms and let the API consumer decide the style of invocation. See http://aquaraga.github.io/functional-programming/golang/2016/11/19/golang-interfaces-vs-functions.html.
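As a sketch of that idea (not from the linked article), a small adapter can bridge the two styles, so the same value can be used with either form of API:

// Adapt a value of the one-method interface I to the function form.
func asFunc(i I) func() string {
    return func() string { return string(i.M()) }
}

// Interface style:  API(s)
// Function style:   API(asFunc(s)), where this API takes a func() string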
Using the following simple example (coded in PHP):
public function doSomething(Registry $registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}

public function doSomething($registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}
What are the benefits of either approach?
Both will ultimately fail, just at different points: the first example will fail if an object not of type Registry is passed, and the second will fail if the object passed does not implement a getData method.
How do you choose when to use either approach?
Those are two different design approaches. The responsibility falls on the developer(s) to make sure neither method fails.
Type hinting is the more robust approach, while duck typing gives you more flexibility.