How would I test this method?

Essentially I've begun work on a wrapper for the Riot Games API and I'm struggling with how to test it. I've got the repository plugged into Travis, so on push it runs go test, but I'm not sure how to go about testing it, since the API_KEY required for the requests changes daily and I can't auto-regenerate it; I'd have to manually add it every day if I tested the endpoints directly.
So I was wondering if it's possible to mock the responses, but in that case I'm guessing I'd need to refactor my code?
So I've made a struct to represent their SummonerDTO:
type Summoner struct {
    ID            int64  `json:"id"`
    AccountID     int64  `json:"accountId"`
    ProfileIconID int    `json:"profileIconId"`
    Name          string `json:"name"`
    Level         int    `json:"summonerLevel"`
    RevisionDate  int64  `json:"revisionDate"`
}
That struct has a method:
func (s Summoner) ByName(name string, region string) (summoner *Summoner, err error) {
    endpoint := fmt.Sprintf("https://%s.api.riotgames.com/lol/summoner/%s/summoners/by-name/%s", REGIONS[region], VERSION, name)
    client := &http.Client{}
    req, err := http.NewRequest("GET", endpoint, nil)
    if err != nil {
        return nil, fmt.Errorf("unable to create new client for request: %v", err)
    }
    req.Header.Set("X-Riot-Token", API_KEY)
    resp, err := client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("unable to complete request to endpoint: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != 200 {
        return nil, fmt.Errorf("request to api failed with: %v", resp.Status)
    }
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("unable to read response body: %v", err)
    }
    if err := json.Unmarshal([]byte(respBody), &summoner); err != nil {
        return nil, fmt.Errorf("unable to unmarshal response body to summoner struct: %v", err)
    }
    return summoner, nil
}
Is it the case that the struct method doesn't have a single responsibility? In a sense it's building the endpoint, firing off the request and parsing the response. Do I need to refactor it in order to make it testable, and if so, what's the best approach for that? Should I make a Request and Response struct and then test those?
To clarify, the API keys used are rate limited and need to be regenerated daily, and Riot Games does not allow you to use a crawler to auto-regenerate your keys. I'm using Travis for continuous integration, so I'm wondering if there's a way to mock the request/response.
Potentially my approach is wrong, still learning.
Hopefully that all makes some form of sense, happy to clarify if not.

Writing unit tests consists of:
Providing known state for all of your inputs.
Testing that, given all meaningful combinations of those inputs, you receive the expected outputs.
So you need to first identify your inputs:
s Summoner
name string
region string
Plus any "hidden" inputs, by way of globals:
client := &http.Client{}
And your outputs are:
summoner *Summoner
err error
(There can also be "hidden" outputs, if you write files, or change global variables, but you don't appear to do that here).
Now the first three inputs are easy to create from scratch for your tests: Just provide an empty Summoner{} (since you don't read or set that at all in your function, there's no need to set it other than to an empty value). name and region can simply be set to strings.
The only part remaining is your http.Client. At a minimum, you should probably pass that in as an argument. Not only does this give you control over your tests, it also allows you to easily use a different client in production in the future.
But to ease testing, you might consider actually passing in a client-like interface, which you can easily mock. The only method you call on client is Do, so you could easily create a Doer interface:
type doer interface {
    Do(req *http.Request) (*http.Response, error)
}
Then change your function signature to take that as one argument:
func (s Summoner) ByName(client doer, name string, region string) (summoner *Summoner, err error) {
Now, in your test you can create a custom type that fulfills the doer interface, and responds with any http.Response you like, without needing to use a server in your tests.
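For example, a test built around such a mock might look like this (a minimal sketch, assuming the test lives in the same package so it can see the unexported doer interface; the fixture JSON, the region value and the fakeDoer name are illustrative only):
import (
    "io/ioutil"
    "net/http"
    "strings"
    "testing"
)

// fakeDoer satisfies the doer interface and returns a canned response.
type fakeDoer struct {
    resp *http.Response
    err  error
}

func (f fakeDoer) Do(req *http.Request) (*http.Response, error) {
    return f.resp, f.err
}

func TestByName(t *testing.T) {
    body := `{"id":1,"accountId":2,"profileIconId":3,"name":"SomeSummoner","summonerLevel":30,"revisionDate":4}`
    client := fakeDoer{
        resp: &http.Response{
            StatusCode: 200,
            Status:     "200 OK",
            Body:       ioutil.NopCloser(strings.NewReader(body)),
        },
    }
    s, err := Summoner{}.ByName(client, "SomeSummoner", "euw")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if s.Name != "SomeSummoner" {
        t.Errorf("got name %q, want %q", s.Name, "SomeSummoner")
    }
}
This exercises the endpoint building, the request, and the JSON parsing without ever touching the network or needing a valid API_KEY.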


How to make an api call faster in Golang?

I am trying to upload a bunch of files using the company's API to the storage service they provide (basically to my account). I have a lot of files, around 40-50.
I have the full paths of the files and use os.Open so that I can pass an io.Reader. I tried client.Files.Upload() without goroutines, but it took so much time to upload them that I decided to use goroutines. Here is the implementation I tried. When I run the program it only uploads one file, the one with the smallest size, or it just seems to wait for a long time. What is wrong with it? Isn't it the case that every iteration of the for loop creates a goroutine, one for each file? How can I make it as fast as possible with goroutines?
var filePaths []string
var wg sync.WaitGroup

// fills the string of slice with fullpath of files.
func fill() {
    filepath.Walk(rootpath, func(path string, info os.FileInfo, err error) error {
        if !info.IsDir() {
            filePaths = append(filePaths, path)
        }
        if err != nil {
            fmt.Println("ERROR:", err)
        }
        return nil
    })
}

func main() {
    fill()
    tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
    oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
    client := putio.NewClient(oauthClient)
    for _, path := range filePaths {
        wg.Add(1)
        go func() {
            defer wg.Done()
            f, err := os.Open(path)
            if err != nil {
                log.Println("err:OPEN", err)
            }
            upload, err := client.Files.Upload(context.TODO(), f, path, 0)
            if err != nil {
                log.Println("error uploading file:", err)
            }
            fmt.Println(upload)
        }()
    }
    wg.Wait()
}
Consider a worker pool pattern like this: https://go.dev/play/p/p6SErj3L6Yc
In this example application, I've taken out the API call and just list the file names. That makes it work on the playground.
A fixed number of worker goroutines are started. We'll use a channel to distribute their work and we'll close the channel to communicate the end of the work. This number could be 1 or 1000 routines, or more. The number should be chosen based on how many concurrent API operations your putio API can reasonably be expected to support.
paths is a chan string we'll use for this purpose.
Workers range over the paths channel to receive new file paths to upload.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

func main() {
    paths := make(chan string)
    var wg = new(sync.WaitGroup)
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go worker(paths, wg)
    }
    if err := filepath.Walk("/usr", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("Failed to walk directory: %T %w", err, err)
        }
        if info.IsDir() {
            return nil
        }
        paths <- path
        return nil
    }); err != nil {
        panic(fmt.Errorf("failed Walk: %w", err))
    }
    close(paths)
    wg.Wait()
}

func worker(paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        // do upload.
        fmt.Println(path)
    }
}
This pattern can handle an indefinitely large number of files without having to load the entire list into memory before processing it. As you can see, this doesn't make the code more complicated; it's actually simpler.
When I run the program it just uploads one file which is the one
Function literals inherit the scope in which they were defined. This is why your code only uploaded one path: the path variable in the for loop was shared with each goroutine, so when that variable changed, all the goroutines picked up the change.
Avoid function literals unless you actually want to inherit scope. Functions defined at the global scope don't inherit any scope, and you must pass all relevant variables to those functions instead. This is a good thing: it makes the functions more straightforward to understand and makes variable "ownership" transitions more explicit.
An appropriate case to use a function literal is the filepath.Walk parameter; its arguments are defined by filepath.Walk, so definition scope is one way to access other values, such as the paths channel in our case.
Speaking of scope, global variables should be avoided unless their scope of usage is truly global. Prefer passing variables between functions to sharing global variables. Again, this makes variable ownership explicit and makes it easy to understand which functions do and don't access which variables. Neither your wait group nor your filePaths has any reason to be global.
f, err := os.Open(path)
Don't forget to close any files you open. When you're dealing with 40 or 50 files, letting all those open file handles pile up until the program ends isn't so bad, but it's a time bomb in your program that will go off when the number of files exceeds the ulimit of allowed open files. Because the function execution greatly exceeds the part where the file needs to be open, defer doesn't make sense in this case. I would use an explicit f.Close() after uploading the file.
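Putting that together, the upload part of the worker might look something like this (a sketch only: the putio calls mirror the ones in your snippet, the client is passed in as an extra argument rather than shared globally, and error handling is kept minimal):
func worker(client *putio.Client, paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        f, err := os.Open(path)
        if err != nil {
            log.Println("open:", err)
            continue
        }
        upload, err := client.Files.Upload(context.TODO(), f, path, 0)
        // close explicitly instead of deferring, so handles don't pile up
        // while the worker keeps looping over more paths
        f.Close()
        if err != nil {
            log.Println("upload:", err)
            continue
        }
        fmt.Println(upload)
    }
}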

Golang - API Server and Socket at the same time

I'm trying to use sockets to communicate with my clients.
A socket would be created after some requests to my API. That is, a client connects (by request only), but then joins a chat, so a socket is created and linked to the right channel.
I've already used sockets (in C, C++, C#, Java) so I understand how they work, and from what I've seen on the web I think what I want to do is possible, but I don't understand how to handle it in Go.
I create a first server:
func main() {
    r := mux.NewRouter()
    r.HandleFunc("/", HomeHandler)
    r.HandleFunc("/products", ProductsHandler)
    r.HandleFunc("/articles", ArticlesHandler)
    http.Handle("/", r)
}
But for the socket, do I need another one?
package main

import (
    "bufio"
    "fmt"
    "net"
    "strings" // only needed below for sample processing
)

func main() {
    fmt.Println("Launching server...")
    // listen on all interfaces
    ln, _ := net.Listen("tcp", ":8081")
    // accept connection on port
    conn, _ := ln.Accept()
    // run loop forever (or until ctrl-c)
    for {
        // will listen for message to process ending in newline (\n)
        message, _ := bufio.NewReader(conn).ReadString('\n')
        // output message received
        fmt.Print("Message Received:", string(message))
        // sample process for string received
        newmessage := strings.ToUpper(message)
        // send new string back to client
        conn.Write([]byte(newmessage + "\n"))
    }
}
Thanks for the help!
Based on our chat discussion.
OVERsimplified example with lots of pseudocode
import (
    "encoding/json"
    "errors"
    "net"
)

type User struct {
    name string
}

type Message struct {
    Action string
    Params map[string]string
}

type Server struct {
    connected_users                 map[*User]net.Conn
    users_connected_with_each_other map[*User][]*User
    good_users                      map[string]*User
}
func (srv *Server) ListenAndServe(addr string) error {
    ln, err := net.Listen("tcp", addr)
    if err != nil {
        return err
    }
    return srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
}

func (srv *Server) Serve(l net.Listener) error {
    defer l.Close()
    for {
        rw, e := l.Accept()
        if e != nil {
            return e
        }
        // you want to create server_conn here with buffers, channels and stuff
        // to use async thread safe read/write from it
        go srv.serve_conn(rw)
    }
}
func (srv *Server) serve_conn(rw net.Conn) error {
    dec := json.NewDecoder(rw)
    var message Message
    // read 1st message he sent, should be token to connect
    dec.Decode(&message)
    token := get_token(message)
    user, ok := srv.good_users[token]
    if !ok {
        return errors.New("BAD USER!")
    }
    // store connected user
    srv.connected_users[user] = rw
    for {
        // async reader will be nice
        dec.Decode(&message)
        switch message.Action {
        case "Message":
            // find users to send message to
            if chats_with, ok := srv.users_connected_with_each_other[user]; ok {
                for _, user_to_send_message_to := range chats_with {
                    // find connections to send message to
                    if conn, ok := srv.connected_users[user_to_send_message_to]; ok {
                        // send json encoded message
                        if err := json.NewEncoder(conn).Encode(message); err != nil {
                            // if write failed store message for later
                        }
                    }
                }
            }
        // other cases
        default:
            // log?
        }
    }
}
func main() {
    known_users_with_tokens := make(map[string]*User)
    srv := &Server{
        connected_users:                 make(map[*User]net.Conn),
        users_connected_with_each_other: make(map[*User][]*User),
        good_users:                      known_users_with_tokens, // map is reference type, so treat it like pointer
    }
    // start our server
    go srv.ListenAndServe(":54321")

    ConnRequestHandler := func(w http.ResponseWriter, r *http.Request) {
        user := create_user_based_on_request(r)
        token := create_token(user)
        // now user will be able to connect to server with token
        known_users_with_tokens[token] = user
    }

    ConnectUsersHandler := func(user1, user2 *User) {
        // you should guard your srv.* members to avoid concurrent read/writes to map
        srv.users_connected_with_each_other[user1] = append(srv.users_connected_with_each_other[user1], user2)
        srv.users_connected_with_each_other[user2] = append(srv.users_connected_with_each_other[user2], user1)
    }

    // initialize your API http.Server
    r := mux.NewRouter()
    r.HandleFunc("/", HomeHandler)
    r.HandleFunc("/products", ProductsHandler)
    r.HandleFunc("/articles", ArticlesHandler)
    r.HandleFunc("/connection_request", ConnRequestHandler) // added
    http.Handle("/", r)
}
Call ConnectUsersHandler(user1, user2) to allow them to communicate with each other.
known_users_with_tokens[token] = user allows a user to connect to the server.
You need to implement an async reader/writer for connections to your server, and useful structs to keep track of good Users.
Guard the Server struct members and provide thread-safe access when updating them.
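For instance, a minimal sketch of guarding one of those maps with a sync.RWMutex (field and method names here are illustrative only):
type Server struct {
    mu              sync.RWMutex
    connected_users map[*User]net.Conn
    // ...the other maps from the example above
}

func (srv *Server) addConnectedUser(u *User, conn net.Conn) {
    srv.mu.Lock()
    defer srv.mu.Unlock()
    srv.connected_users[u] = conn
}

func (srv *Server) connectionFor(u *User) (net.Conn, bool) {
    srv.mu.RLock()
    defer srv.mu.RUnlock()
    conn, ok := srv.connected_users[u]
    return conn, ok
}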
UPD: Looks like json.NewEncoder(connection).Encode(&message) and json.NewDecoder(connection).Decode(&message) are async and thread safe, so you can write simultaneously from different goroutines. No need for manual sync, YAY!
The default http server accepts connections on one "host:port" only.
The answer depends on what protocol you are going to use to communicate via your sockets.
I suggest doing it this way (much simplified):
1. Leave http.Server alone to serve your API (it implements the HTTP 1.*/2 protocols, so you don't need to worry about it).
2. Implement your own "MultiSocketServer". To do so:
2.1 Implement GracefulListener (it must implement net.Listener), since you need to shut down your sockets when you don't need them anymore, right? (A minimal sketch follows the MultiSocketServer code below.)
2.2 Implement MultiSocketServer.Serve(l GracefulListener) (hello, http.Server.Serve()) to serve an individual connection. Your protocol for communicating with clients via sockets goes here; something like net/textproto will be easy to implement, since GracefulListener.Accept() will return a net.Conn.
2.3 Add methods MultiSocketServer.ListenAndServe(addr) and MultiSocketServer.StopServe(l GracefulListener) to your MultiSocketServer.
type MultiSocketServer struct {
    listeners []*GracefulListener // or a map?
    // lots of other stuff
}

// looks familiar? (http.Server.ListenAndServe)
func (s *MultiSocketServer) ListenAndServe(addr string) *GracefulListener {
    ln, _ := net.Listen("tcp", addr) // error handling omitted, see note below
    graceful_listener := NewGracefulListener(ln) // pseudocode: wrap ln in your GracefulListener
    s.listeners = append(s.listeners, graceful_listener)
    go s.Serve(graceful_listener)
    return graceful_listener
}

func (s *MultiSocketServer) StopServe(graceful_listener *GracefulListener) {
    graceful_listener.Stop()
    // pseudocode
    remove_listener_from_slice(s.listeners, graceful_listener)
}
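A GracefulListener along the lines of step 2.1 might be sketched like this (names are illustrative; closing the wrapped listener is what unblocks a pending Accept):
type GracefulListener struct {
    net.Listener
    stopped chan struct{}
}

func NewGracefulListener(ln net.Listener) *GracefulListener {
    return &GracefulListener{Listener: ln, stopped: make(chan struct{})}
}

// Stop closes the underlying listener, which makes any pending Accept return an error.
func (g *GracefulListener) Stop() {
    close(g.stopped)
    g.Listener.Close()
}

func (g *GracefulListener) Accept() (net.Conn, error) {
    conn, err := g.Listener.Accept()
    if err != nil {
        select {
        case <-g.stopped:
            return nil, errors.New("listener stopped")
        default:
            return nil, err
        }
    }
    return conn, nil
}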
Of course, you need to add error checking and (probably) a mutex to guard MultiSocketServer.listeners to make it thread safe.
In your main(), start your API http.Server and initialize your MultiSocketServer. Now, from any http.Handler/http.HandlerFunc of the http.Server, you should be able to call MultiSocketServer.ListenAndServe(addr) to listen for and serve your socket connections.
UPDATE based on question
however, I'm not sure I understand the "In your main()" part. If I understand correctly, you're saying I have my API, and after starting it I initialize MultiSocketServer. But where? After starting my API? Or do you mean it would be better to use the logic of your code as the API, with every request going through a socket?
BTW: updated MultiSocketServer.ListenAndServe to start Listen and return graceful_listener.
func main() {
    // init MultiSocketServer
    multi_socket_server := &MultiSocketServer{} // nil listeners slice is fine for now, complex initialization will be added later
    // no listeners yet, serves nothing

    // create a new Handler for your "socket requests"
    SocketRequestHandler := func(w http.ResponseWriter, r *http.Request) {
        // identify client, assign him an address to connect
        addr_to_listen := parse_request(r) // pseudocode
        listener := multi_socket_server.ListenAndServe(addr_to_listen)
        // TODO: handle errors
        // now your multi_socket_server listens on addr_to_listen and serves it with the multi_socket_server.Serve method in its own goroutine
        // as I said, the MultiSocketServer.Serve method must implement your protocol (plaintext Reader/Writer on the listener for now?)
        save_listener_in_context_or_whatever_you_like_to_track_it(listener) // pseudo
    }

    SocketDisconnectHandler := func(w http.ResponseWriter, r *http.Request) {
        // identify client
        some_client := parse_request(r) // pseudocode
        // get listener based on info
        listener := get_listener_from_context_or_whatever(some_client) // pseudo
        multi_socket_server.StopServe(listener)
        // TODO: handle errors
    }

    // initialize your API http.Server
    r := mux.NewRouter()
    r.HandleFunc("/", HomeHandler)
    r.HandleFunc("/products", ProductsHandler)
    r.HandleFunc("/articles", ArticlesHandler)
    r.HandleFunc("/socket_request", SocketRequestHandler)       // added
    r.HandleFunc("/socket_disconnect", SocketDisconnectHandler) // added
    http.Handle("/", r)
    // it creates a new http.Server with DefaultServeMux as Handler (which is configured with your http.Handle("/", r) call)
    http.ListenAndServe(":8080", nil) // start serving API via HTTP proto
}
Actually, you may call multi_socket_server.ListenAndServe(addr_to_listen) and multi_socket_server.StopServe(listener) from any handler in your API server.
Every time you call multi_socket_server.ListenAndServe(addr_to_listen) it will create a new listener and serve on it; you have to keep track of this (don't listen on the same address more than once; I think it will error out anyway).
Your MultiSocketServer.Serve might look like:
func (s *MultiSocketServer) Serve(l net.Listener) {
    defer l.Close()
    // accept a connection on the listener
    conn, _ := l.Accept()
    for {
        // will listen for message to process ending in newline (\n)
        message, _ := bufio.NewReader(conn).ReadString('\n')
        // output message received
        fmt.Print("Message Received:", string(message))
        // sample process for string received
        newmessage := strings.ToUpper(message)
        // send new string back to client
        conn.Write([]byte(newmessage + "\n"))
    }
}
Possible GracefulListener implementation github
Or are you trying to achieve something completely different? =)

How to maintain good Go package test coverage when dealing with obscure errors?

I'm trying to maintain 100% code coverage on some of my Go packages. This isn't viable everywhere, even with some tests that I select with a -integration build tag on a build system, but it should be possible for my relatively isolated library packages.
I'm having trouble dealing with coverage for obscure error paths, though.
Here is an example of one of my methods that's part of an integration test where there's a real filesystem:
func (idx Index) LoadPost(title string) (*PostSpec, string, error) {
    postFolder := strings.Replace(strings.ToLower(title), " ", "_", -1)
    spec, err := idx.getSpec(postFolder)
    if err != nil {
        return nil, "", err
    }
    f, err := os.Open(path.Join(idx.blogdir, postFolder, "content.html"))
    if err != nil {
        return nil, "", err
    }
    defer f.Close()
    b, err := ioutil.ReadAll(f)
    if err != nil {
        return nil, "", err
    }
    return spec, string(b), nil
}
Here's what it looks like in go tool -cover:
Hitting that block is not easy. I can't think of any way to do it other than creating a special test directory where the file it's trying to open is something other than a regular file. That seems like a lot of complexity.
This isn't too much of a deal on its own, but it means that I have to remember that 97.3% coverage is the right figure. If I see that number go down, does it mean I've broken my tests and there's now more uncovered code? Or just that I've managed to improve my package through simplification and removal of dead code? It leads to second guessing.
More importantly to some, in a business context it's an obstacle to a nice build dashboard.
io/ioutil/ioutil_test.go tests that error simply by calling the ioutil.ReadFile() function with a non-existent file.
That shouldn't require any setup.
filename := "rumpelstilzchen"
contents, err := ReadFile(filename)
if err == nil {
    t.Fatalf("ReadFile %s: error expected, none found", filename)
}
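Applied to the LoadPost method above, the same idea might look like this (a sketch that assumes the test lives in the same package, that Index can be built with a blogdir pointing at a fixture directory, and that getSpec succeeds for that fixture while content.html is missing, so only the os.Open branch fails):
func TestLoadPostMissingContent(t *testing.T) {
    idx := Index{blogdir: "testdata/no_content"} // fixture dir without a content.html
    if _, _, err := idx.LoadPost("Some Title"); err == nil {
        t.Fatal("LoadPost: error expected, none found")
    }
}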

Handling connection reset errors in Go

On a plain Go HTTP handler, if I disconnect a client while still writing to the response, http.ResponseWriter.Write will return an error with a message like write tcp 127.0.0.1:60702: connection reset by peer.
Now, from the syscall package I have syscall.ECONNRESET, which has the message connection reset by peer, so they're not exactly the same.
How can I match them, so I know not to panic if it occurs? On other occasions I have been doing
if err == syscall.EAGAIN {
    /* handle error differently */
}
for instance, and that worked fine, but I can't do it with syscall.ECONNRESET.
Update:
Because I'm desperate for a solution, for the time being I'll be doing this very dirty hack:
if strings.Contains(err.Error(), syscall.ECONNRESET.Error()) {
    println("it's a connection reset by peer!")
    return
}
The error you get has the underlying type *net.OpError, built here, for example.
You should be able to type-assert the error to its concrete type like this:
operr, ok := err.(*net.OpError)
And then access its Err field, which should correspond to the syscall error you need:
operr.Err.Error() == syscall.ECONNRESET.Error()
The answer by #zian is more useful than the accepted answer, but now on Go 1.13+ it is preferable to avoid manually unwrapping the errors:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
This has the benefit that you can also use it more generally, such as after:
resp, err := http.Get("http://127.0.0.1:4444")
Here this err would otherwise have an extra layer of wrapping (*url.Error) and would be missed by the condition #zian used without explicitly unwrapping it a third time.
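Put together, the check after an http.Get might look like this (a minimal sketch; the address is just a placeholder):
resp, err := http.Get("http://127.0.0.1:4444")
if err != nil {
    // errors.Is unwraps through *url.Error, *net.OpError and *os.SyscallError for us
    if errors.Is(err, syscall.ECONNRESET) {
        fmt.Println("Found a ECONNRESET")
    }
    return
}
defer resp.Body.Close()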
I came across this issue and the accepted answer was sufficient to point me in the right direction. However, the code it provides to check if the Error embedded inside *net.OpError is ECONNRESET is not complete, at least not for Golang 1.9.
The error embedded at OpError.Err is actually of type *os.SyscallError (https://golang.org/pkg/os/#SyscallError). The Write() function implemented by struct *net.netFD (which is what's being written to when sending a response over the network) looks like this:
func (fd *netFD) Write(p []byte) (nn int, err error) {
    nn, err = fd.pfd.Write(p)
    runtime.KeepAlive(fd)
    return nn, wrapSyscallError("write", err)
}
And wrapSyscallError:
func wrapSyscallError(name string, err error) error {
    if _, ok := err.(syscall.Errno); ok {
        err = os.NewSyscallError(name, err)
    }
    return err
}
The error inside the *os.SyscallError struct can be directly compared against syscall.ECONNRESET.
So, given an error returned from a network write (e.g. a call to http.ResponseWritter.Write), the full code block to determine if that error is ECONNRESET is:
if opErr, ok := err.(*net.OpError); ok {
    if syscallErr, ok := opErr.Err.(*os.SyscallError); ok {
        if syscallErr.Err == syscall.ECONNRESET {
            fmt.Println("Found a ECONNRESET")
        }
    }
}
#zian - thanks for your good solution to João Pinto's (and my) question: How can I match them, so I know not to panic if it occurs?
As of Go version 1.13, an improvement is to use the errors.Is function, which does error unwrapping and testing sequentially 'under the hood'. For example:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
#SteveCoffman - adding to your good answer, cheers!
Working with Errors in Go 1.13 - The Go Blog - Golang

What is the idiomatic way to return either a struct or an error?

I have a function that returns either a Card, which is a struct type, or an error.
The problem is, how can I return from the function when an error occurs? nil is not valid for structs and I don't have a valid zero value for my Card type.
func canFail() (card Card, err error) {
    // return nil, errors.New("Not yet implemented"); // Fails
    return Card{Ace, Spades}, errors.New("not yet implemented"); // Works, but very ugly
}
The only workaround I found is to use a *Card rather than a Card, and make it either nil when there is an error or point it at an actual Card when no error happens, but that's quite clumsy.
func canFail() (card *Card, err error) {
    return nil, errors.New("not yet implemented");
}
Is there a better way?
EDIT: I found another way, but I don't know if this is idiomatic or even good style.
func canFail() (card Card, err error) {
    return card, errors.New("not yet implemented")
}
Since card is a named return value, I can use it without initializing it. It is zeroed in its own way; I don't really care, since the calling function is not supposed to use this value.
func canFail() (card Card, err error) {
    return card, errors.New("not yet implemented")
}
I think this, your third example, is fine too. The understood rule is that when a function returns an error, other return values cannot be relied upon to have meaningful values unless documentation clearly explains otherwise. So returning a perhaps meaningless struct value here is fine.
For example,
type Card struct {
}

func canFail() (card Card, err error) {
    return Card{}, errors.New("not yet implemented")
}
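On the caller's side that convention reads naturally (a minimal sketch):
card, err := canFail()
if err != nil {
    // card only holds its zero value here; don't use it
    return err
}
// safe to use card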
func canFail() (card Card, err error) {
    if somethingWrong {
        err = errors.New("Not yet implemented")
        return
    }
    if foo {
        card = baz
        return
    }
    ...
    // or
    return Card{Ace, Spades}, nil
}
For me, I prefer your second option.
func canFail() (card *Card, err error) {
    return nil, errors.New("not yet implemented");
}
This way you can make sure that when errors happen, the canFail() callers won't be able to use the card since it's nil. We can't make sure that the callers will check the error first.
peterSO's answer is the closest, but it's not quite what I would use. I think this is best:
func canFail() (Card, error) {
    return Card{}, errors.New("not yet implemented")
}
First, it's not using a pointer just so it can use nil for returns. I think that's a neat trick, but unless you actually need the struct to be a pointer (for modifying or other reason), then returning a value is better. Also I don't think the return values should be named, unless you are utilizing them, like this:
func canFail() (card Card, err error) {
    return
}
and that is problematic for two reasons. First, you aren't always going to be in a situation where you can simply have the return value be whatever that variable is at the time. Second, if you have a larger function, you won't be able to use a naked return in the deeper levels, as you will get variable shadow errors.
Finally, using Card{} instead of nil or card is more verbose, but it better communicates what you are doing. If you use either of these:
return
return card, err
It's not clear without context if the function was successful or not, while this:
return Card{}, err
is pretty clear that the function failed. It's the same pattern you would use with primitive types:
return false, err
return 0, err
return '\x00', err
return "", err
return []byte{}, err
https://github.com/golang/go/wiki/CodeReviewComments#pass-values
As a possible alternative to returning the struct, you might consider letting the caller allocate it and having the function set its fields.
func canFail(card *Card) (err error) {
    if someCondition {
        // set one property
        card.Suit = Diamond
        // set all at once
        *card = Card{Ace, Spade}
    } else {
        err = errors.New("something went wrong")
    }
    return
}
If you are not comfortable pretending that Go supports C++ style references you should also check card for being nil.
https://play.golang.org/p/o-2TYwWCTL
If your function does not behave the way someone else would assume from reading its signature (i.e. "if an error has occurred, I should ignore the value returned along with it"),
pretty much like any io.Reader, which may return n > 0 together with an error,
then you should simply document it, explaining to the user what to consider about the value returned along with the error.
Changing the signature, and thus the general API relationships, for such a case (rare but not unavoidable) is not the way to Go.
Instead, you should adequately document the behavior of the function.
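The io.Reader convention mentioned above looks like this on the caller's side (a minimal sketch; process is a hypothetical helper):
buf := make([]byte, 4096)
for {
    n, err := r.Read(buf)
    if n > 0 {
        process(buf[:n]) // use the data that was read before inspecting the error
    }
    if err == io.EOF {
        break
    }
    if err != nil {
        return err
    }
}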