Confusion on how Golang pointers are preserved in structs - oop

I'm currently learning golang (mostly a Java/C# developer) and I hit an issue with pointers and defer.
I'm trying to wrap the writes to a CSV file under a struct in an OO-like style. From the examples I found online, it seems that creating "methods" on a struct could be done like so:
type MyObject struct {
    fp  *os.File
    csv *csv.Writer
}

func (mo MyObject) Open(filepath string) {
    println(&mo)
    var err error
    mo.fp, err = os.Create(filepath)
    if err != nil {
        panic(err)
    }
    mo.csv = csv.NewWriter(mo.fp)
}
The problem I hit was that once I left the Open method, the pointers for fp and csv went back to nil. Subsequent calls on this struct would throw a nil pointer error. A full example can be found here.
After a lot of googling, I ended up looking at how golang implemented their logger. They used a pointer to the object like so:
type MyObject struct {
    fp  *os.File
    csv *csv.Writer
}

func New() *MyObject {
    return &MyObject{}
}

func (mo *MyObject) Open(filepath string) {
    println(&mo)
    var err error
    mo.fp, err = os.Create(filepath)
    if err != nil {
        panic(err)
    }
    mo.csv = csv.NewWriter(mo.fp)
}
A refactoring of my code (see here) shows it works as expected. I'm still confused, though, about why the first approach didn't work. I'm guessing I'm misunderstanding something about how structs, pointers, and/or defer work. What am I missing?

It didn't work in the first case because func (mo MyObject) Open(filepath string) only gets a local copy of MyObject, and all changes made to it stay within that copy.
But after you added * to the receiver, i.e. (mo *MyObject), the changes made within the method affect the original MyObject.
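A minimal sketch of the difference, using a throwaway Counter type (not from your code, just for illustration):

package main

import "fmt"

type Counter struct {
    n int
}

// Inc has a value receiver: it increments a copy, so the caller's Counter is unchanged.
func (c Counter) Inc() { c.n++ }

// IncP has a pointer receiver: it increments the original Counter.
func (c *Counter) IncP() { c.n++ }

func main() {
    var c Counter
    c.Inc()
    fmt.Println(c.n) // 0 - the increment happened on a copy
    c.IncP()
    fmt.Println(c.n) // 1 - the increment happened on the original
}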
you can check here for more info
hope this helps

Related

How to make an api call faster in Golang?

I am trying to upload a bunch of files using the company's API to the storage service they provide (basically to my account). I have a lot of files, something like 40-50.
I get the full path of the files and use os.Open so that I can pass the io.Reader. I tried client.Files.Upload() without goroutines, but it took a long time to upload them, so I decided to use goroutines. Here is the implementation I tried. When I run the program it just uploads one file (the one with the smallest size, or something like that) and then waits for a long time. What is wrong with it? Isn't it the case that every time the for loop runs it creates a goroutine, continues its cycle, and creates one for every file? How do I make this as fast as possible with goroutines?
var filePaths []string
var wg sync.WaitGroup

// fill populates filePaths with the full path of every file under rootpath.
func fill() {
    filepath.Walk(rootpath, func(path string, info os.FileInfo, err error) error {
        if !info.IsDir() {
            filePaths = append(filePaths, path)
        }
        if err != nil {
            fmt.Println("ERROR:", err)
        }
        return nil
    })
}

func main() {
    fill()
    tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
    oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
    client := putio.NewClient(oauthClient)
    for _, path := range filePaths {
        wg.Add(1)
        go func() {
            defer wg.Done()
            f, err := os.Open(path)
            if err != nil {
                log.Println("err:OPEN", err)
            }
            upload, err := client.Files.Upload(context.TODO(), f, path, 0)
            if err != nil {
                log.Println("error uploading file:", err)
            }
            fmt.Println(upload)
        }()
    }
    wg.Wait()
}
Consider a worker pool pattern like this: https://go.dev/play/p/p6SErj3L6Yc
In this example application, I've taken out the API call and just list the file names. That makes it work on the playground.
A fixed number of worker goroutines are started. We'll use a channel to distribute their work and we'll close the channel to communicate the end of the work. This number could be 1 or 1000 routines, or more. The number should be chosen based on how many concurrent API operations your putio API can reasonably be expected to support.
paths is a chan string we'll use for this purpose.
Workers range over the paths channel to receive new file paths to upload.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

func main() {
    paths := make(chan string)
    var wg = new(sync.WaitGroup)
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go worker(paths, wg)
    }
    if err := filepath.Walk("/usr", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("Failed to walk directory: %T %w", err, err)
        }
        if info.IsDir() {
            return nil
        }
        paths <- path
        return nil
    }); err != nil {
        panic(fmt.Errorf("failed Walk: %w", err))
    }
    close(paths)
    wg.Wait()
}

func worker(paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        // do upload.
        fmt.Println(path)
    }
}
This pattern can handle an indefinitely large number of files without having to load the entire list into memory before processing it. As you can see, this doesn't make the code more complicated - actually, it's simpler.
When I run the program it just uploads one file which is the one
Function literals inherit the scope in which they were defined. This is why your code only uploaded one path: the path variable declared in the for loop was shared by every goroutine, so when that variable changed, all the goroutines picked up the change.
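If you do keep the goroutine-per-file shape, the usual fix (prior to Go 1.22, which gives each loop iteration its own variable) is to pass the loop variable into the function literal as an argument, as in this minimal sketch of the question's loop:

for _, path := range filePaths {
    wg.Add(1)
    go func(path string) { // path is now a parameter, copied once per iteration
        defer wg.Done()
        // open and upload path here
    }(path)
}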
Avoid function literals unless you actually want to inherit scope. Functions defined at the global scope don't inherit any scope, and you must pass all relevant variables to those functions instead. This is a good thing - it makes the functions more straightforward to understand and makes variable "ownership" transitions more explicit.
An appropriate case to use a function literal could be the filepath.Walk parameter; its arguments are defined by filepath.Walk, so definition scope is one way to access other values - such as the paths channel, in our case.
Speaking of scope, global variables should be avoided unless their scope of usage is truly global. Prefer passing variables between functions to sharing global variables. Again, this makes variable ownership explicit and makes it easy to understand which functions do and don't access which variables. Neither your wait group nor your filePaths have any cause to be global.
f, err := os.Open(path)
Don't forget to close any files you open. When you're dealing with 40 or 50 files, letting all those open file handles pile up until the program ends isn't so bad, but it's a time bomb in your program that will go off when the number of files exceeds the ulimit of allowed open files. Because the worker function keeps running long after each file is done with (deferred calls only run when the function returns, not at the end of each loop iteration), defer doesn't make sense in this case. I would use an explicit f.Close() after uploading the file.
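A sketch of what that looks like inside the worker from the pool example above; upload here is a stand-in for the client.Files.Upload call in the question, so the snippet can run on its own:

package main

import (
    "fmt"
    "io"
    "log"
    "os"
    "sync"
)

// worker mirrors the pool worker above, but opens, uploads, and explicitly
// closes each file instead of just printing the path.
func worker(paths <-chan string, wg *sync.WaitGroup, upload func(r io.Reader, name string) error) {
    defer wg.Done()
    for path := range paths {
        f, err := os.Open(path)
        if err != nil {
            log.Println("open:", err)
            continue
        }
        err = upload(f, path)
        f.Close() // close right away; a defer here would keep every handle open until the worker returns
        if err != nil {
            log.Println("upload:", err)
        }
    }
}

func main() {
    paths := make(chan string, 1)
    paths <- os.Args[0] // any existing file will do for the demo
    close(paths)

    var wg sync.WaitGroup
    wg.Add(1)
    go worker(paths, &wg, func(r io.Reader, name string) error {
        n, err := io.Copy(io.Discard, r) // a fake "upload" that just drains the file
        fmt.Println("uploaded", name, ":", n, "bytes")
        return err
    })
    wg.Wait()
}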

How do you get a Golang program to print the line number of the error it just called?

I was trying to throw errors in my Golang program with log.Fatal, but log.Fatal does not also print the line where log.Fatal was run. Is there no way of getting access to the line number that called log.Fatal? I.e., is there a way to get the line number when throwing an error?
I was trying to google this but was unsure how. The best thing I could get was printing the stack trace, which I guess is good but might be a little too much. I also don't want to write debug.PrintStack() every time I need the line number; I am just surprised there isn't any built-in function for this, like log.FatalStackTrace() or something that isn't custom.
Also, the reason I do not want to make my own debugging/error handling stuff is that I don't want people to have to learn how to use my special custom handling code. I just want something standard where people can read my code later and be like
"ah ok, so its throwing an error and doing X..."
The less people have to learn about my code the better :)
You can set the Flags on either a custom Logger, or the default logger, to include Llongfile or Lshortfile:
// to change the flags on the default logger
log.SetFlags(log.LstdFlags | log.Lshortfile)
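For example, a minimal sketch; the exact timestamp in the comment is illustrative, but the file:line part comes from Lshortfile:

package main

import "log"

func main() {
    log.SetFlags(log.LstdFlags | log.Lshortfile)
    log.Println("something went wrong")
    // output looks like: 2009/11/10 23:00:00 main.go:7: something went wrong
}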
Short version: there's nothing directly built in; however, you can implement it with a minimal learning curve using runtime.Caller:

func HandleError(err error) (b bool) {
    if err != nil {
        // notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        _, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] %s:%d %v", filename, line, err)
        b = true
    }
    return
}

// FancyHandleError logs the function name as well.
func FancyHandleError(err error) (b bool) {
    if err != nil {
        // notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        pc, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] in %s[%s:%d] %v", runtime.FuncForPC(pc).Name(), filename, line, err)
        b = true
    }
    return
}

func main() {
    if FancyHandleError(fmt.Errorf("it's the end of the world")) {
        log.Print("stuff")
    }
}
playground
If you need exactly a stack trace, take a look at https://github.com/ztrue/tracerr
I created this package in order to have both stack traces and source fragments, to be able to debug faster and log errors with much more detail.
Here is a code example:
package main

import (
    "io/ioutil"

    "github.com/ztrue/tracerr"
)

func main() {
    if err := read(); err != nil {
        tracerr.PrintSourceColor(err)
    }
}

func read() error {
    return readNonExistent()
}

func readNonExistent() error {
    _, err := ioutil.ReadFile("/tmp/non_existent_file")
    // Add stack trace to existing error, no matter if it's nil.
    return tracerr.Wrap(err)
}

Saving enumerated values to a database

I'm new to Go and I'm trying to write a little program to save enumerated values to a database.
The way I declare my values is as follows:
type FileType int64

const (
    movie FileType = iota
    music
    book
    etc
)
I use these values in my struct like this:
type File struct {
    Name string
    Type FileType
    Size int64
}
I use gorp for my database stuff, but I guess the use of gorp isn't relevant to my problem. I put stuff in my DB like this:
dbmap.Insert(&File{"MyBook.pdf", movie, 1000})
but when I try to retrieve stuff…
dbmap.Select(&dbFiles, "select * from Files")
I get the following error:
panic: reflect.Set: value of type int64 is not assignable to type main.FileType
When I use int64 as the type for the const(...) and for the File.Type field, everything works fine, but I'm new to Go and want to understand the problem.
The way I see it, I have two problems:
Why can't Go convert this stuff successfully? I looked at the source code of the Go reflection and sql packages and there are methods for this kind of conversion, but they seem to fail. Is this a bug? What is the problem?
I figured out, that one can implement the sql.Scanner interface by implementing the following method:
Scan(src interface{}) error
I tried to implement the method and I was even able to get the right value from src and convert it to a FileType, but I was confused about whether I should implement the method on (f *FileType) or on (f FileType). Either way the method gets invoked; however, I'm not able to overwrite f (or at least the update gets lost later), and the File instances read from the DB always had 0 as the value for File.Type.
Do you have any ideas on those two points?
I recently had the same need, and the solution is to implement two interfaces:
sql/driver.Valuer
sql.Scanner
Here's a working example:
type FileType int64
func (u *FileType) Scan(value interface{}) error { *u = FileType(value.(int64)); return nil }
func (u FileType) Value() (driver.Value, error) { return int64(u), nil }
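To illustrate when each method gets called, here is a sketch using plain database/sql; the SQLite driver and the files table are assumptions made for the example, not something from the question:

package main

import (
    "database/sql"
    "database/sql/driver"
    "log"

    _ "github.com/mattn/go-sqlite3" // driver choice is an assumption for this sketch
)

type FileType int64

const (
    movie FileType = iota
    music
    book
)

// Value is called when a FileType is passed as a query argument.
func (u FileType) Value() (driver.Value, error) { return int64(u), nil }

// Scan is called when a column is read back into a FileType.
func (u *FileType) Scan(value interface{}) error { *u = FileType(value.(int64)); return nil }

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if _, err := db.Exec(`CREATE TABLE files (name TEXT, type INTEGER)`); err != nil {
        log.Fatal(err)
    }
    if _, err := db.Exec(`INSERT INTO files (name, type) VALUES (?, ?)`, "MyBook.pdf", book); err != nil {
        log.Fatal(err)
    }

    var ft FileType
    if err := db.QueryRow(`SELECT type FROM files WHERE name = ?`, "MyBook.pdf").Scan(&ft); err != nil {
        log.Fatal(err)
    }
    log.Println(ft == book) // true
}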
Slightly off-topic, but may be useful to others as I kept revisiting this question/answer when solving a similar problem when working with postgres enum fields in golang (which are returned as bytes).
type Status string

// Status values
const (
    incomplete Status = "incomplete"
    complete   Status = "complete"
    reject     Status = "reject"
)

func (s *Status) Scan(value interface{}) error {
    asBytes, ok := value.([]byte)
    if !ok {
        return errors.New("Scan source is not []byte")
    }
    *s = Status(string(asBytes))
    return nil
}

func (s Status) Value() (driver.Value, error) {
    // validation would go here
    return string(s), nil
}
Go needs to be specific with types, which can be a pain sometimes.
(f FileType) is cheaper than (f *FileType) for "native" types; pretty much unless you have a complex type, it's almost always better not to use a pointer.
What do you mean it doesn't overwrite it? Did you re-save the struct after you modified it?

Handling connection reset errors in Go

On a plain Go HTTP handler, if I disconnect a client while still writing to the response, http.ResponseWriter.Write will return an error with a message like write tcp 127.0.0.1:60702: connection reset by peer.
Now from the syscall package, I have syscall.ECONNRESET, which has the message connection reset by peer, so they're not exactly the same.
How can I match them, so I know not to panic if it occurs? On other occasions I have been doing
if err == syscall.EAGAIN {
    /* handle error differently */
}
for instance, and that worked fine, but I can't do it with syscall.ECONNRESET.
Update:
Because I'm desperate for a solution, for the time being I'll be doing this very dirty hack:
if strings.Contains(err.Error(), syscall.ECONNRESET.Error()) {
    println("it's a connection reset by peer!")
    return
}
The error you get has the underlying type *net.OpError, built here, for example.
You should be able to type-assert the error to its concrete type like this:
operr, ok := err.(*net.OpError)
And then access its Err field, which should correspond to the syscall error you need:
operr.Err.Error() == syscall.ECONNRESET.Error()
The answer by #zian is more useful than the accepted answer, but now on Go 1.13+ it is preferable to avoid manually unwrapping the errors:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
This has the benefit that you can also use it more generally, such as after:
resp, err := http.Get("http://127.0.0.1:4444")
Here this err would otherwise have an extra layer of wrapping (*url.Error) and would be missed by the condition #zian used without explicitly unwrapping it a third time.
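A self-contained sketch of that case; the address is the placeholder from the snippet above, and which errno you actually get depends on how the connection fails:

package main

import (
    "errors"
    "fmt"
    "net/http"
    "syscall"
)

func main() {
    resp, err := http.Get("http://127.0.0.1:4444")
    if err != nil {
        // err is a *url.Error wrapping the lower-level errors;
        // errors.Is walks the whole chain, so no manual unwrapping is needed.
        if errors.Is(err, syscall.ECONNRESET) {
            fmt.Println("Found a ECONNRESET")
        }
        return
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}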
I came across this issue and the accepted answer was sufficient to point me in the right direction. However, the code it provides to check if the Error embedded inside *net.OpError is ECONNRESET is not complete, at least not for Golang 1.9.
The error embedded at OpError.Err is actually of type *os.SyscallError (https://golang.org/pkg/os/#SyscallError). The Write() function implemented by struct *net.netFD (which is what's being written to when sending a response over the network) looks like this:
func (fd *netFD) Write(p []byte) (nn int, err error) {
    nn, err = fd.pfd.Write(p)
    runtime.KeepAlive(fd)
    return nn, wrapSyscallError("write", err)
}
And wrapSyscallError:
func wrapSyscallError(name string, err error) error {
    if _, ok := err.(syscall.Errno); ok {
        err = os.NewSyscallError(name, err)
    }
    return err
}
The error inside the *os.SyscallError struct can be directly compared against syscall.ECONNRESET.
So, given an error returned from a network write (e.g. a call to http.ResponseWriter.Write), the full code block to determine if that error is ECONNRESET is:
if opErr, ok := err.(*net.OpError); ok {
    if syscallErr, ok := opErr.Err.(*os.SyscallError); ok {
        if syscallErr.Err == syscall.ECONNRESET {
            fmt.Println("Found a ECONNRESET")
        }
    }
}
#zian - thanks for your good solution to João Pinto's (and my) question: How can I match them, so I know not to panic if it occurs?
As of Go version 1.13, an improvement is to use the errors.Is function, which does the error unwrapping and testing sequentially 'under the hood'. For example:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
#SteveCoffman - adding to your good answer, cheers!
Working with Errors in Go 1.13 - The Go Blog - Golang

What is the idiomatic way to return either a struct or an error?

I have a function that returns either a Card, which is a struct type, or an error.
The problem is, how can I return from the function when an error occurs ? nil is not valid for structs and I don't have a valid zero value for my Card type.
func canFail() (card Card, err error) {
    // return nil, errors.New("Not yet implemented"); // Fails
    return Card{Ace, Spades}, errors.New("not yet implemented"); // Works, but very ugly
}
The only workaround I found is to use a *Card rather than a Card, and make it either nil when there is an error or point it at an actual Card when no error happens, but that's quite clumsy.
func canFail() (card *Card, err error) {
    return nil, errors.New("not yet implemented");
}
Is there a better way ?
EDIT : I found another way, but don't know if this is idiomatic or even good style.
func canFail() (card Card, err error) {
    return card, errors.New("not yet implemented")
}
Since card is a named return value, I can use it without initializing it. It is zeroed in its own way, I don't really care since the calling function is not supposed to use this value.
func canFail() (card Card, err error) {
    return card, errors.New("not yet implemented")
}
I think this, your third example, is fine too. The understood rule is that when a function returns an error, other return values cannot be relied upon to have meaningful values unless documentation clearly explains otherwise. So returning a perhaps meaningless struct value here is fine.
For example,
type Card struct {
}

func canFail() (card Card, err error) {
    return Card{}, errors.New("not yet implemented")
}
func canFail() (card Card, err error) {
    if somethingWrong {
        err = errors.New("Not yet implemented")
        return
    }
    if foo {
        card = baz
        return
    }
    ...
    // or
    return Card{Ace, Spades}, nil
}
For me, I prefer your second option.
func canFail() (card *Card, err error) {
    return nil, errors.New("not yet implemented");
}
This way you can make sure that when errors happen, the canFail() callers won't be able to use the card since it's nil. We can't make sure that the callers will check the error first.
peterSO's answer is the closest, but it's not quite what I would use. I think this is best:
func canFail() (Card, error) {
    return Card{}, errors.New("not yet implemented")
}
First, it's not using a pointer just so it can use nil for returns. I think that's a neat trick, but unless you actually need the struct to be a pointer (for modifying or other reason), then returning a value is better. Also I don't think the return values should be named, unless you are utilizing them, like this:
func canFail() (card Card, err error) {
    return
}
and that is problematic for two reasons. First, you aren't always going to be in a situation where you can simply have the return value be whatever that variable is at the time. Second, if you have a larger function, you won't be able to use a naked return in the deeper levels, as you will get variable shadowing errors.
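A sketch of the shadowing problem; helper here is a hypothetical deeper call, and the snippet deliberately does not compile, to show the error:

package main

import "errors"

type Card struct{}

// helper is a hypothetical deeper call that can fail.
func helper() (Card, error) { return Card{}, errors.New("boom") }

func canFail() (card Card, err error) {
    if _, err := helper(); err != nil { // := declares a new err that shadows the named result
        return // compile error: "err is shadowed during return"
    }
    return
}

func main() {
    _, _ = canFail()
}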
Finally, using Card{} instead of nil or card is more verbose, but it better communicates what you are doing. If you use either of these:
return
return card, err
It's not clear without context if the function was successful or not, while this:
return Card{}, err
is pretty clear that the function failed. It's the same pattern you would use with primitive types:
return false, err
return 0, err
return '\x00', err
return "", err
return []byte{}, err
https://github.com/golang/go/wiki/CodeReviewComments#pass-values
As a possible alternative to returning the struct you might consider letting the caller allocate it and the function set params.
func canFail(card *Card) (err error) {
    if someCondition {
        // set one property
        card.Suit = Diamond
        // set all at once
        *card = Card{Ace, Spade}
    } else {
        err = errors.New("something went wrong")
    }
    return
}
If you are not comfortable pretending that Go supports C++ style references, you should also check card for being nil.
https://play.golang.org/p/o-2TYwWCTL
If your function does not behave the way someone else would assume from reading its signature (i.e. the usual assumption that if an error has occurred, the value returned along with it should be ignored),
pretty much like any io.Reader, which may return n > 0 together with an error,
then you should simply document it, to explain to the user what to consider regarding the value returned along with the error.
Changing the signature, and thus the general API relationships, for such a case, rare but not unavoidable, is not the way to Go.
Instead, you should adequately document the behavior of the function.
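For instance, a sketch of what that documentation might look like on the question's canFail; the partial-result behavior described in the comment is hypothetical:

// canFail parses a Card. If err is non-nil, the returned Card still holds
// whatever was decoded before the failure, much like an io.Reader may
// return n > 0 together with an error.
func canFail() (Card, error) {
    // ...
    return Card{Ace, Spades}, errors.New("not yet implemented")
}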