How do you get a Golang program to print the line number of the error it just called? - error-handling

I was trying to throw errors in my Golang program with log.Fatal, but log.Fatal does not also print the line where it was called. Is there no way of getting access to the line number that called log.Fatal? i.e. is there a way to get the line number when throwing an error?
I was trying to google this but was unsure how. The best thing I could find was printing the stack trace, which I guess is good but might be a little too much. I also don't want to write debug.PrintStack() every time I need the line number; I am just surprised there isn't any built-in function for this, like log.FatalStackTrace() or something that isn't custom.
Also, the reason I do not want to make my own debugging/error handling stuff is that I don't want people to have to learn how to use my special custom handling code. I just want something standard where people can read my code later and be like
"ah ok, so its throwing an error and doing X..."
The less people have to learn about my code the better :)

You can set the flags on either a custom Logger, or the default logger, to include Llongfile or Lshortfile:
// to change the flags on the default logger
log.SetFlags(log.LstdFlags | log.Lshortfile)
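For example, here is a minimal sketch (my own illustration, not part of the original answer) of the effect of Lshortfile on the default logger; the file name, line number, and error message in the comment are just placeholders for what your program would actually print:
package main

import (
    "errors"
    "log"
)

func main() {
    // Include the file name and line number in every log line.
    log.SetFlags(log.LstdFlags | log.Lshortfile)

    err := errors.New("something went wrong")
    // Prints something like: 2009/11/10 23:00:00 main.go:14: something went wrong
    log.Fatal(err)
}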

Short version: there's nothing directly built in; however, you can implement it with a minimal learning curve using runtime.Caller:
package main

import (
    "fmt"
    "log"
    "runtime"
)

func HandleError(err error) (b bool) {
    if err != nil {
        // Notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        _, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] %s:%d %v", filename, line, err)
        b = true
    }
    return
}

// FancyHandleError logs the function name as well.
func FancyHandleError(err error) (b bool) {
    if err != nil {
        // Notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        pc, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] in %s[%s:%d] %v", runtime.FuncForPC(pc).Name(), filename, line, err)
        b = true
    }
    return
}

func main() {
    if FancyHandleError(fmt.Errorf("it's the end of the world")) {
        log.Print("stuff")
    }
}
playground

If what you need is exactly a stack trace, take a look at https://github.com/ztrue/tracerr
I created this package to have both a stack trace and source fragments, to be able to debug faster and log errors with much more detail.
Here is a code example:
package main

import (
    "io/ioutil"

    "github.com/ztrue/tracerr"
)

func main() {
    if err := read(); err != nil {
        tracerr.PrintSourceColor(err)
    }
}

func read() error {
    return readNonExistent()
}

func readNonExistent() error {
    _, err := ioutil.ReadFile("/tmp/non_existent_file")
    // Add stack trace to existing error, no matter if it's nil.
    return tracerr.Wrap(err)
}
And here is the output: a colorized stack trace with the relevant source fragments (shown as an image in the original post).

Related

How to make an api call faster in Golang?

I am trying to upload a bunch of files using the company's API to the storage service they provide (basically to my account). I have lots of files, around 40-50.
I get the full path of the files and use os.Open, so that I can pass the io.Reader. I tried to use client.Files.Upload() without goroutines, but it took too much time to upload them, so I decided to use goroutines. Here is the implementation I tried. When I run the program it just uploads one file, the one with the smallest size, or else it waits for a long time. What is wrong with it? Isn't it the case that every time the for loop runs it creates a goroutine for every file and continues its cycle? How do I make it as fast as possible with goroutines?
var filePaths []string
var wg sync.WaitGroup

// fill populates filePaths with the full paths of the files.
func fill() {
    filepath.Walk(rootpath, func(path string, info os.FileInfo, err error) error {
        if !info.IsDir() {
            filePaths = append(filePaths, path)
        }
        if err != nil {
            fmt.Println("ERROR:", err)
        }
        return nil
    })
}

func main() {
    fill()

    tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
    oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
    client := putio.NewClient(oauthClient)

    for _, path := range filePaths {
        wg.Add(1)
        go func() {
            defer wg.Done()
            f, err := os.Open(path)
            if err != nil {
                log.Println("err:OPEN", err)
            }
            upload, err := client.Files.Upload(context.TODO(), f, path, 0)
            if err != nil {
                log.Println("error uploading file:", err)
            }
            fmt.Println(upload)
        }()
    }
    wg.Wait()
}
Consider a worker pool pattern like this: https://go.dev/play/p/p6SErj3L6Yc
In this example application, I've taken out the API call and just list the file names. That makes it work on the playground.
A fixed number of worker goroutines are started. We'll use a channel to distribute their work and we'll close the channel to communicate the end of the work. This number could be 1 or 1000 routines, or more. The number should be chosen based on how many concurrent API operations your putio API can reasonably be expected to support.
paths is a chan string we'll use for this purpose.
Workers range over the paths channel to receive new file paths to upload.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

func main() {
    paths := make(chan string)
    var wg = new(sync.WaitGroup)

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go worker(paths, wg)
    }

    if err := filepath.Walk("/usr", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("Failed to walk directory: %T %w", err, err)
        }
        if info.IsDir() {
            return nil
        }
        paths <- path
        return nil
    }); err != nil {
        panic(fmt.Errorf("failed Walk: %w", err))
    }
    close(paths)
    wg.Wait()
}

func worker(paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        // do upload.
        fmt.Println(path)
    }
}
This pattern can handle an indefinitely large number of files without having to load the entire list into memory before processing it. As you can see, this doesn't make the code more complicated - actually, it's simpler.
When I run the program it just uploads one file which is the one
Function literals inherit the scope in which they were defined. This is why the original code only uploaded one path - the path variable in the for loop was shared with each goroutine, so when that variable changed, all the goroutines picked up the change.
Avoid function literals unless you actually want to inherit scope. Functions defined at the global scope don't inherit any scope, and you must pass all relevant variables to those functions instead. This is a good thing - it makes the functions more straightforward to understand and makes variable "ownership" transitions more explicit.
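For illustration (my own sketch, not from the original answer), here is what passing the loop variable into the goroutine explicitly looks like, as opposed to capturing it from the enclosing scope; the file names and the print statement are placeholders for the real upload:
package main

import (
    "fmt"
    "sync"
)

func main() {
    filePaths := []string{"a.txt", "b.txt", "c.txt"} // placeholder paths
    var wg sync.WaitGroup
    for _, path := range filePaths {
        wg.Add(1)
        // Pass path as an argument so each goroutine gets its own copy
        // instead of sharing the loop variable from the enclosing scope.
        go func(p string) {
            defer wg.Done()
            fmt.Println("uploading", p)
        }(path)
    }
    wg.Wait()
}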
An appropriate case to use a function literal could be the filepath.Walk parameter; its arguments are defined by filepath.Walk, so definition scope is one way to access other values - such as the paths channel, in our case.
Speaking of scope, global variables should be avoided unless their scope of usage is truly global. Prefer passing variables between functions to sharing global variables. Again, this makes variable ownership explicit and makes it easy to understand which functions do and don't access which variables. Neither your wait group nor your filePaths have any cause to be global.
f, err := os.Open(path)
Don't forget to close any files you open. When you're dealing with 40 or 50 files, letting all those open file handles pile up until the program ends isn't so bad, but it's a time bomb in your program that will go off when the number of files exceeds the ulimit of allowed open files. Because the function execution greatly exceeds the part where the file needs to be open, defer doesn't make sense in this case. I would use an explicit f.Close() after uploading the file.
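A minimal sketch of that point (my own illustration; upload here is a stand-in for the real client.Files.Upload call from the question), closing the file explicitly once the upload is done rather than deferring it:
package main

import (
    "fmt"
    "io"
    "log"
    "os"
)

// upload is a stand-in for the real API call (e.g. client.Files.Upload).
func upload(r io.Reader, name string) error {
    _, err := io.Copy(io.Discard, r) // pretend to send the bytes somewhere
    return err
}

func uploadFile(path string) {
    f, err := os.Open(path)
    if err != nil {
        log.Println("open:", err)
        return
    }
    err = upload(f, path)
    // Close as soon as the upload is finished instead of deferring,
    // so open file handles don't pile up across many files.
    f.Close()
    if err != nil {
        log.Println("upload:", err)
        return
    }
    fmt.Println("uploaded", path)
}

func main() {
    uploadFile("example.txt") // placeholder path
}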

Confusion on how Golang pointers are preserved in structs

I'm currently learning golang (mostly a Java/C# developer) and I hit an issue with pointers and defer.
I'm trying to wrap the writes to a CSV file under a struct in an OO-like style. From the examples I found online, it seems that creating "methods" on a struct could be done like so:
type MyObject struct {
    fp  *os.File
    csv *csv.Writer
}

func (mo MyObject) Open(filepath string) {
    println(&mo)
    var err error
    mo.fp, err = os.Create(filepath)
    if err != nil {
        panic(err)
    }
    mo.csv = csv.NewWriter(mo.fp)
}
The problem I hit was that once I left the Open method, the pointers for fp and csv went back to nil. Subsequent calls on this struct would then fail with a nil pointer error. A full example can be found here.
After a lot of googling, I ended up looking at how golang implemented their logger. They used a pointer to the object like so:
type MyObject struct {
    fp  *os.File
    csv *csv.Writer
}

func New() *MyObject {
    return &MyObject{}
}

func (mo *MyObject) Open(filepath string) {
    println(&mo)
    var err error
    mo.fp, err = os.Create(filepath)
    if err != nil {
        panic(err)
    }
    mo.csv = csv.NewWriter(mo.fp)
}
A refactoring of my code (see here) shows it works as expected. I'm still confused though why the first method didn't work. I'm guessing I'm misunderstanding something on how structs, pointers, and/or defer work. What am I missing?
It didn't work in the first case because func (mo MyObject) Open(filepath string) only got a local copy of MyObject, and all changes made to it remained within that context.
But after you added * to the receiver, i.e. (mo *MyObject), the changes within the function affected the original MyObject.
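A minimal sketch of the difference (my own, not from the original answer) between a value receiver and a pointer receiver:
package main

import "fmt"

type Counter struct {
    n int
}

// Value receiver: operates on a copy, the caller's Counter is unchanged.
func (c Counter) IncByValue() { c.n++ }

// Pointer receiver: operates on the original Counter.
func (c *Counter) IncByPointer() { c.n++ }

func main() {
    c := Counter{}
    c.IncByValue()
    fmt.Println(c.n) // 0
    c.IncByPointer()
    fmt.Println(c.n) // 1
}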
you can check here for more info
hope this helps

Testing log.Fatalf in go?

I'd like to achieve 100% test coverage in go code. I am not able to cover the following example - can anyone help me with that?
package example

import (
    "io/ioutil"
    "log"
)

func checkIfReadable(filename string) (string, error) {
    _, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Fatalf("Cannot read the file... how to add coverage test for this line ?!?")
    }
    return "", nil
}

func main() {
    checkIfReadable("dummy.txt")
}
Some dummy test for that:
package example

import (
    "fmt"
    "testing"
)

func TestCheckIfReadable(t *testing.T) {
    someResult, err := checkIfReadable("dummy.txt")
    if len(someResult) > 0 {
        fmt.Println("this will not print")
        t.Fail()
    }
    if err != nil {
        fmt.Println("this will not print")
        t.Fail()
    }
}

func TestMain(t *testing.T) {
    ...
}
The issue is that log.Fatalf calls os.Exit and the Go process dies.
I could modify the code and replace the built-in library with my own - which makes the tests less reliable.
I could modify the code and create a proxy and a wrapper and a .... in other words a very complex mechanism to change all calls to log.Fatalf.
I could stop using the built-in log package... which is equal to asking "how much is the Go built-in worth?"
I could live with not having 100% coverage.
I could replace log.Fatalf with something else - but then what is the point of the built-in log.Fatalf?
I could try to mess with system memory and, depending on my OS, replace the memory address of the function (...), i.e. do something obscure and dirty.
Any other ideas?
Use log.Print instead of log.Fatal and return the error value that you declared in the signature of checkIfReadable. Or don't log the error at all and return it to some place that knows better how to handle it.
The function log.Fatal is strictly for reporting your program's final breath.
Calling log.Fatal is a bit worse than calling panic (there is also log.Panic), because it does not execute deferred calls. Remember that overusing panic in Go is considered bad style.
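As a sketch of that advice (my own illustration, not code from the answer), checkIfReadable could return the error and leave the decision to the caller, which makes the error branch trivially testable:
package example

import (
    "fmt"
    "io/ioutil"
)

func checkIfReadable(filename string) (string, error) {
    _, err := ioutil.ReadFile(filename)
    if err != nil {
        // Return the error instead of killing the process;
        // the caller (or a test) decides how to handle it.
        return "", fmt.Errorf("cannot read %s: %w", filename, err)
    }
    return "", nil
}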
A way to get 100% test coverage and not fail at the same time is to use recover() to catch a panic; note, though, that log.Fatalf() calls os.Exit rather than panicking, so for recover() to work the code would have to use log.Panicf() instead.
Here are the docs for recover. I think it fits your use case nicely.

Handling connection reset errors in Go

On a plain Go HTTP handler, if I disconnect a client while still writing to the response, http.ResponseWriter.Write will return an error with a message like write tcp 127.0.0.1:60702: connection reset by peer.
Now from the syscall package, I have syscall.ECONNRESET, which has the message connection reset by peer, so they're not exactly the same.
How can I match them, so I know not to panic if it occurs? On other occasions I have been doing
if err == syscall.EAGAIN {
    /* handle error differently */
}
for instance, and that worked fine, but I can't do it with syscall.ECONNRESET.
Update:
Because I'm desperate for a solution, for the time being I'll be doing this very dirty hack:
if strings.Contains(err.Error(), syscall.ECONNRESET.Error()) {
    println("it's a connection reset by peer!")
    return
}
The error you get has the underlying type *net.OpError, built here, for example.
You should be able to type-assert the error to its concrete type like this:
operr, ok := err.(*net.OpError)
And then access its Err field, which should correspond to the syscall error you need:
operr.Err.Error() == syscall.ECONNRESET.Error()
The answer by #zian is more useful than the accepted answer, but now on Go 1.13+ it is preferable to avoid manually unwrapping the errors:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
This has the benefit that you can also use it more generally, such as after:
resp, err := http.Get("http://127.0.0.1:4444")
Here this err would otherwise have an extra layer of wrapping (*url.Error) and would be missed by the condition #zian used without explicitly unwrapping it a third time.
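A minimal sketch of that (my own, assuming a server on 127.0.0.1:4444 that resets connections) using errors.Is directly on the error returned by http.Get:
package main

import (
    "errors"
    "fmt"
    "net/http"
    "syscall"
)

func main() {
    _, err := http.Get("http://127.0.0.1:4444")
    // errors.Is unwraps *url.Error, *net.OpError and *os.SyscallError for us.
    if errors.Is(err, syscall.ECONNRESET) {
        fmt.Println("Found a ECONNRESET")
    } else if err != nil {
        fmt.Println("some other error:", err)
    }
}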
I came across this issue and the accepted answer was sufficient to point me in the right direction. However, the code it provides to check if the Error embedded inside *net.OpError is ECONNRESET is not complete, at least not for Golang 1.9.
The error embedded at OpError.Err is actually of type *os.SyscallError (https://golang.org/pkg/os/#SyscallError). The Write() function implemented by struct *net.netFD (which is what's being written to when sending a response over the network) looks like this:
func (fd *netFD) Write(p []byte) (nn int, err error) {
    nn, err = fd.pfd.Write(p)
    runtime.KeepAlive(fd)
    return nn, wrapSyscallError("write", err)
}
And wrapSyscallError:
func wrapSyscallError(name string, err error) error {
    if _, ok := err.(syscall.Errno); ok {
        err = os.NewSyscallError(name, err)
    }
    return err
}
The error inside the *os.SyscallError struct can be directly compared against syscall.ECONNRESET.
So, given an error returned from a network write (e.g. a call to http.ResponseWriter.Write), the full code block to determine if that error is ECONNRESET is:
if opErr, ok := err.(*net.OpError); ok {
    if syscallErr, ok := opErr.Err.(*os.SyscallError); ok {
        if syscallErr.Err == syscall.ECONNRESET {
            fmt.Println("Found a ECONNRESET")
        }
    }
}
#zian - thanks for your good solution to João Pinto's (and my) question : How can I match them, so I know not to panic if it occurs ?
As of Go version 1.13, an improvement is to use the errors.Is function, which does error unwrapping and testing sequentially 'under the hood'. For example:
if errors.Is(opErr, syscall.ECONNRESET) {
    fmt.Println("Found a ECONNRESET")
}
#SteveCoffman - adding to your good answer, cheers!
Working with Errors in Go 1.13 - The Go Blog - Golang

Exit with error code in go?

What's the idiomatic way to exit a program with some error code?
The documentation for Exit says "The program terminates immediately; deferred functions are not run.", and log.Fatal just calls Exit. For things that aren't heinous errors, terminating the program without running deferred functions seems extreme.
Am I supposed to pass around some state that indicate that there's been an error, and then call Exit(1) at some point where I know that I can exit safely, with all deferred functions having been run?
I do something along these lines in most of my real main packages, so that the return err convention is adopted as soon as possible, and has a proper termination:
func main() {
    if err := run(); err != nil {
        fmt.Fprintf(os.Stderr, "error: %v\n", err)
        os.Exit(1)
    }
}

func run() error {
    err := something()
    if err != nil {
        return err
    }
    // etc
    return nil
}
In Python I commonly use a pattern which, converted to Go, looks like this:
func run() int {
    // here goes
    // the code
    return 1
}

func main() {
    os.Exit(run())
}
I think the clearest way to do it is to set exitCode at the top of main, then defer the exit call as the next step. That lets you change exitCode anywhere in main, and its last value is what the program will exit with:
package main

import (
    "fmt"
    "os"
)

func main() {
    exitCode := 0
    defer func() { os.Exit(exitCode) }()

    // Do whatever, including deferring more functions
    defer func() {
        fmt.Printf("Do some cleanup\n")
    }()
    func() {
        fmt.Printf("Do some work\n")
    }()

    // But let's say something went wrong
    exitCode = 1

    // Do even more work/cleanup if you want
    // At the end, os.Exit will be called with the last value of exitCode
}
Output:
Do some work
Do some cleanup
Program exited: status 1.
Go Playground: https://play.golang.org/p/AMUR4m_A9Dw
Note that an important disadvantage of this is that you don't exit the process as soon as you set the error code.
As mentioned by fas, you have func Exit(exitcode int) from the os package.
However, if you need the deferred functions to be applied, you can always use the defer keyword like this:
http://play.golang.org/p/U-hAS88Ug4
You perform all your operations, assign an error variable, and at the very end, when everything is cleaned up, you can exit safely.
Otherwise, you could also use panic/recover:
http://play.golang.org/p/903e76GnQ-
When you have an error, you panic, and you clean up where you catch (recover) it.
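A minimal sketch of that second approach (my own, since the playground snippets are only linked): panic on error, recover in a deferred function, clean up, then exit with a non-zero code:
package main

import (
    "fmt"
    "os"
)

func main() {
    exitCode := 0
    defer func() {
        if r := recover(); r != nil {
            fmt.Fprintln(os.Stderr, "error:", r)
            exitCode = 1
        }
        // Cleanup happens here, before the process exits.
        fmt.Println("cleaning up")
        os.Exit(exitCode)
    }()

    fmt.Println("doing work")
    panic("something went wrong")
}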
Yes, actually. The os package provides this.
package main

import "os"

func main() {
    os.Exit(1)
}
http://golang.org/pkg/os/#Exit
Edit: so it looks like you know of Exit. This article gives an overview of Panic which will let deferred functions run before returning. Using this in conjunction with an exit may be what you're looking for. http://blog.golang.org/defer-panic-and-recover
Another good way I follow is:
if err != nil {
    // log.Fatal will print the error message and internally call os.Exit(1),
    // so the program will terminate.
    log.Fatal("fatal error message")
}