I'm running some tests in Go and I want to skip the slow ones. For example, this one uses bcrypt, so it's slow:
// +build slow
package services

import (
    "testing"
    "testing/quick"
)

// using bcrypt takes too much time, reduce the number of iterations.
var config = &quick.Config{MaxCount: 20}

func TestSignaturesAreSame(t *testing.T) {
    same := func(simple string) bool {
        result, err := Encrypt(simple)
        success := err == nil && ComparePassWithHash(simple, result)
        return success
    }
    if err := quick.Check(same, config); err != nil {
        t.Error(err)
    }
}
To avoid running this on every test run I've added the // +build slow constraint. The test should only run with go test -tags slow, but unfortunately it runs every time (the -v flag shows it running).
Any idea what's wrong?
Your // +build slow needs to be followed by a blank line
To distinguish build constraints from package documentation, a series of build constraints must be followed by a blank line.
See Build Constraints in the go/build package documentation.
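In other words, the file should start like this (on Go 1.17 and later gofmt will typically also add the equivalent //go:build line and keep the two in sync):

//go:build slow
// +build slow

package services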
I am trying to upload a bunch of files to the storage service my company provides, using their API (basically to my own account). I have about 40-50 files.
I have the full path of each file and use os.Open so that I can pass an io.Reader. I tried client.Files.Upload() without goroutines first, but it took far too long, so I decided to use goroutines. Here is the implementation I tried. When I run the program it only uploads one file, the smallest one, or else it just waits for a very long time. What is wrong with it? Isn't each iteration of the loop supposed to create a goroutine for each file and then continue? How can I make this as fast as possible with goroutines?
var filePaths []string
var wg sync.WaitGroup

// fill populates the slice with the full paths of the files.
func fill() {
    filepath.Walk(rootpath, func(path string, info os.FileInfo, err error) error {
        if !info.IsDir() {
            filePaths = append(filePaths, path)
        }
        if err != nil {
            fmt.Println("ERROR:", err)
        }
        return nil
    })
}
func main() {
    fill()
    tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
    oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
    client := putio.NewClient(oauthClient)
    for _, path := range filePaths {
        wg.Add(1)
        go func() {
            defer wg.Done()
            f, err := os.Open(path)
            if err != nil {
                log.Println("err:OPEN", err)
            }
            upload, err := client.Files.Upload(context.TODO(), f, path, 0)
            if err != nil {
                log.Println("error uploading file:", err)
            }
            fmt.Println(upload)
        }()
    }
    wg.Wait()
}
Consider a worker pool pattern like this: https://go.dev/play/p/p6SErj3L6Yc
In this example application, I've taken out the API call and just print the file names. That makes it runnable on the playground.
A fixed number of worker goroutines are started. We'll use a channel to distribute their work, and we'll close the channel to signal the end of the work. The number could be 1 or 1000 goroutines, or more; it should be chosen based on how many concurrent operations the putio API can reasonably be expected to support.
paths is a chan string we'll use for this purpose.
Workers range over the paths channel to receive new file paths to upload.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

func main() {
    paths := make(chan string)
    var wg = new(sync.WaitGroup)
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go worker(paths, wg)
    }
    if err := filepath.Walk("/usr", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("Failed to walk directory: %T %w", err, err)
        }
        if info.IsDir() {
            return nil
        }
        paths <- path
        return nil
    }); err != nil {
        panic(fmt.Errorf("failed Walk: %w", err))
    }
    close(paths)
    wg.Wait()
}

func worker(paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        // do upload.
        fmt.Println(path)
    }
}
This pattern can handle an indefinitely large number of files without having to load the entire list into memory before processing it. As you can see, this doesn't make the code more complicated - actually, it's simpler.
When I run the program it just uploads one file which is the one
Function literals inherit the scope in which they are defined. This is why your code only uploaded one file: the path loop variable was shared by every goroutine, so when it changed, all of the goroutines picked up the change.
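The quickest local fix for that is to pass the loop variable into the closure so each goroutine gets its own copy, along these lines (upload here is a hypothetical helper standing in for the open-and-upload work the goroutine does):

for _, path := range filePaths {
    wg.Add(1)
    go func(path string) {
        defer wg.Done()
        // path here is a per-goroutine copy, not the shared loop variable.
        upload(path)
    }(path)
}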
Avoid function literals unless you actually want to inherit scope. Functions declared at package scope don't inherit any enclosing scope, so you must pass all relevant variables to them instead. This is a good thing: it makes the functions more straightforward to understand and makes the transfer of variable "ownership" explicit.
An appropriate case for a function literal is the filepath.Walk callback; its arguments are fixed by filepath.Walk, so the definition scope is one way to access other values, such as the paths channel in our case.
Speaking of scope, global variables should be avoided unless their scope of usage is truly global. Prefer passing variables between functions to sharing global variables. Again, this makes variable ownership explicit and makes it easy to understand which functions do and don't access which variables. Neither your wait group nor your filePaths have any cause to be global.
f, err := os.Open(path)
Don't forget to close any files you open. With 40 or 50 files, letting the open file handles pile up until the program ends isn't so bad, but it's a time bomb in your program that will go off when the number of files exceeds the ulimit on open files. Because the function runs for much longer than the part where the file needs to be open, defer doesn't make sense in this case; I would call f.Close() explicitly right after the upload finishes.
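A sketch of how the worker from the example above might handle this, assuming the *putio.Client is passed in as an extra parameter (names and the Upload call shape taken from the question's code):

func worker(client *putio.Client, paths <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for path := range paths {
        f, err := os.Open(path)
        if err != nil {
            log.Println("open:", err)
            continue
        }
        upload, err := client.Files.Upload(context.TODO(), f, path, 0)
        f.Close() // close as soon as the upload attempt finishes, not at program exit
        if err != nil {
            log.Println("upload:", err)
            continue
        }
        fmt.Println(upload)
    }
}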
I'd like to achieve 100% test coverage in Go code. I am not able to cover the following example - can anyone help me with that?
package example

import (
    "io/ioutil"
    "log"
)

func checkIfReadable(filename string) (string, error) {
    _, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Fatalf("Cannot read the file... how to add coverage test for this line ?!?")
    }
    return "", nil
}

func main() {
    checkIfReadable("dummy.txt")
}
A dummy test for that:
package example

import (
    "fmt"
    "testing"
)

func TestCheckIfReadable(t *testing.T) {
    someResult, err := checkIfReadable("dummy.txt")
    if len(someResult) > 0 {
        fmt.Println("this will not print")
        t.Fail()
    }
    if err != nil {
        fmt.Println("this will not print")
        t.Fail()
    }
}

func TestMain(t *testing.T) {
    ...
}
The issue is that log.Fatalf calls os.Exit, so the test process dies on the spot.
I could modify the code and replace the built-in library with my own - which makes the tests less reliable.
I could modify the code and create a proxy, and a wrapper, and a ... - in other words, a very complex mechanism to redirect all calls to log.Fatalf.
I could stop using the built-in log package... which amounts to asking "how much is the Go standard library worth?"
I could live with not having 100% coverage.
I could replace log.Fatalf with something else - but then what is the point of the built-in log.Fatalf?
I could try to mangle system memory and, depending on my OS, replace the memory address of the function (...) - i.e. do something obscure and dirty.
Any other ideas?
Use log.Print instead of log.Fatal and return the error value that you declared in the signature of checkIfReadable. Or don't log the error at all and just return it to a caller that knows better how to handle it.
The function log.Fatal is strictly for reporting your program's final breath.
Calling log.Fatal is a bit worse than calling panic (there is also log.Panic), because it does not execute deferred calls. Remember that overusing panic in Go is considered bad style.
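For instance, checkIfReadable could look something like this (a sketch staying close to the original), and the missing-file case then becomes trivial to cover:

func checkIfReadable(filename string) (string, error) {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        // hand the error back to the caller instead of killing the process
        return "", err
    }
    return string(data), nil
}

func TestCheckIfReadableMissingFile(t *testing.T) {
    if _, err := checkIfReadable("no_such_file.txt"); err == nil {
        t.Fatal("expected an error for a missing file")
    }
}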
Another way to get 100% test coverage without the test process dying is to use recover() to catch a panic; note, though, that log.Fatalf calls os.Exit rather than panicking, so this only helps if the code uses log.Panicf (or panic) instead.
Here are the docs for recover. I think it fits your use case nicely.
I'm writing a little program to manage restarts to other processes.
Basically when an app process starts (call it A), it spawns a new process (call it D), which runs a simple HTTP server. When D receives an HTTP request, it kills A and restarts it.
The problem is that the restarted A no longer responds to CTRL-C, and I'm not sure why. It may be something simple, or maybe I don't really understand the relationship between processes, the terminal, and signals. It's running in the same terminal with the same stdin/stdout/stderr. Below is a full program demonstrating this behaviour.
package main

import (
    "flag"
    "log"
    "net/http"
    "os"
    "os/exec"
    "strconv"
    "time"
)

/*
Running this program starts an app (repeatedly prints 'hi') and spawns a new process running a simple HTTP server.
When the server receives a request, it kills the other process and restarts it.
All three processes use the same stdin/stdout/stderr.
The restarted process does not respond to CTRL-C :(
*/

var serv = flag.Bool("serv", false, "run server")

// run the app or run the server
func main() {
    flag.Parse()
    if *serv {
        runServer()
    } else {
        runApp()
    }
}

// handle request to server
// url should contain pid of process to restart
func handler(w http.ResponseWriter, r *http.Request) {
    pid, err := strconv.Atoi(r.URL.Path[1:])
    if err != nil {
        log.Println("send a number...")
    }
    // find the process
    proc, err := os.FindProcess(pid)
    if err != nil {
        log.Println("can't find proc", pid)
        return
    }
    // terminate the process
    log.Println("Terminating the process...")
    err = proc.Signal(os.Interrupt)
    if err != nil {
        log.Println("failed to signal interrupt")
        return
    }
    // restart the process
    cmd := exec.Command("restarter")
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Start(); err != nil {
        log.Println("Failed to restart app")
        return
    }
    log.Println("Process restarted")
}

// run the server.
// this will only work the first time and that's fine
func runServer() {
    http.HandleFunc("/", handler)
    if err := http.ListenAndServe(":9999", nil); err != nil {
        log.Println(err)
    }
}

// the app prints 'hi' in a loop
// but first it spawns a child process which runs the server
func runApp() {
    cmd := exec.Command("restarter", "-serv")
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Start(); err != nil {
        log.Println(err)
    }
    log.Println("This is my process. It goes like this")
    log.Println("PID:", os.Getpid())
    for {
        time.Sleep(time.Second)
        log.Println("hi again")
    }
}
The program expects to be installed. For convenience you can fetch it with go get github.com/ebuchman/restarter.
Run the program with restarter. It should print its process id. Then curl http://localhost:9999/<procid> to initiate the restart. The new process will now not respond to CTRL-C. Why? What am I missing?
This doesn't really have anything to do with Go. You start process A from your terminal shell. Process A starts process D (not sure what happened to B, but never mind). Process D kills process A. Now your shell sees that the process it started has exited, so the shell prepares to listen to another command. Process D starts another copy of process A, but the shell doesn't know anything about it. When you type ^C, the shell will handle it. If you run another program, the shell will arrange so that ^C goes to that program. The shell knows nothing about your copy of process A, so it's never going to direct a ^C to that process.
You can check out the approach taken by two HTTP server frameworks in order to listen for and intercept signals (including SIGINT, or even SIGTERM):
kornel661/nserv, where the ZeroDowntime-example/server.go uses a channel:
// catch signals:
signals := make(chan os.Signal)
signal.Notify(signals, os.Interrupt, os.Kill)
zenazn/goji, where graceful/signal.go uses a similar approach:
var stdSignals = []os.Signal{syscall.SIGINT, syscall.SIGTERM}
var sigchan = make(chan os.Signal, 1)
func init() {
    go waitForSignal()
}
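A minimal, self-contained sketch of the same idea: the process installs its own handler for SIGINT/SIGTERM and shuts down when one arrives. Note that, as the first answer explains, this only helps for signals actually delivered to the process (e.g. sent with kill); a ^C typed in the terminal still goes to whatever the shell considers its foreground job.

package main

import (
    "log"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    log.Println("running with PID", os.Getpid())
    s := <-sigs
    log.Println("got signal:", s)
    // perform any cleanup here before exiting
}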
I'm trying to maintain 100% code coverage on some of my Go packages. This isn't viable everywhere, even with some tests that I select with a -integration build tag on a build system, but it should be possible for my relatively isolated library packages.
I'm having trouble getting coverage for obscure error paths, though.
Here is an example of one of my methods that's exercised by an integration test against a real filesystem:
func (idx Index) LoadPost(title string) (*PostSpec, string, error) {
    postFolder := strings.Replace(strings.ToLower(title), " ", "_", -1)
    spec, err := idx.getSpec(postFolder)
    if err != nil {
        return nil, "", err
    }
    f, err := os.Open(path.Join(idx.blogdir, postFolder, "content.html"))
    if err != nil {
        return nil, "", err
    }
    defer f.Close()
    b, err := ioutil.ReadAll(f)
    if err != nil {
        return nil, "", err
    }
    return spec, string(b), nil
}
Here's what it looks like in go tool cover: the one block left uncovered is the error return after ioutil.ReadAll.
Hitting that block is not easy. I can't think of any way to do it other than creating a special test directory where the file it's trying to open is something other than a regular file. That seems like a lot of complexity.
This isn't too much of a deal on its own, but it means that I have to remember that 97.3% coverage is the right figure. If I see that number go down, does it mean I've broken my tests and there's now more uncovered code? Or just that I've managed to improve my package through simplification and removal of dead code? It leads to second guessing.
More importantly to some, in a business context it's an obstacle to a nice build dashboard.
io/ioutil/ioutil_test.go tests that error simply by calling the ioutil.ReadFile() function with a non-existent file.
That shouldn't require any setup.
filename := "rumpelstilzchen"
contents, err := ReadFile(filename)
if err == nil {
    t.Fatalf("ReadFile %s: error expected, none found", filename)
}
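Applied to LoadPost, a test along these lines would exercise the os.Open error branch (a sketch; it assumes you can construct an Index over a temporary directory and satisfy getSpec without creating content.html):

func TestLoadPostMissingContent(t *testing.T) {
    dir := t.TempDir()
    // create whatever getSpec needs for "Some Post" under dir here,
    // but do not create dir/some_post/content.html
    idx := Index{blogdir: dir}
    if _, _, err := idx.LoadPost("Some Post"); err == nil {
        t.Fatal("expected an error when content.html is missing")
    }
}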
I was trying to throw errors in my Go program with log.Fatal, but log.Fatal does not print the line where it was called. Is there no way of getting access to the line number that called log.Fatal? i.e. is there a way to get the line number when throwing an error?
I tried to google this but was unsure what to search for. The best I could find was printing the stack trace, which I guess is good but might be a little too much. I also don't want to write debug.PrintStack() every time I need the line number; I am just surprised there isn't a built-in function for this like log.FatalStackTrace() or something that isn't custom.
Also, the reason I do not want to make my own debugging/error handling stuff is that I don't want people to have to learn how to use my special custom handling code. I just want something standard where people can read my code later and be like
"ah ok, so its throwing an error and doing X..."
The less people have to learn about my code the better :)
You can set the flags on either a custom Logger or the default one to include Llongfile or Lshortfile:
// to change the flags on the default logger
log.SetFlags(log.LstdFlags | log.Lshortfile)
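A quick sketch of how that plays out:

package main

import "log"

func main() {
    log.SetFlags(log.LstdFlags | log.Lshortfile)
    // With Lshortfile set, this prints something like:
    //   2009/11/10 23:00:00 main.go:9: something went wrong
    log.Fatal("something went wrong") // then exits with status 1
}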
Short version: there's nothing directly built in; however, you can implement it with a minimal learning curve using runtime.Caller:
package main

import (
    "fmt"
    "log"
    "runtime"
)

func HandleError(err error) (b bool) {
    if err != nil {
        // notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        _, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] %s:%d %v", filename, line, err)
        b = true
    }
    return
}

// FancyHandleError logs the function name as well.
func FancyHandleError(err error) (b bool) {
    if err != nil {
        // notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        pc, filename, line, _ := runtime.Caller(1)
        log.Printf("[error] in %s[%s:%d] %v", runtime.FuncForPC(pc).Name(), filename, line, err)
        b = true
    }
    return
}

func main() {
    if FancyHandleError(fmt.Errorf("it's the end of the world")) {
        log.Print("stuff")
    }
}
playground
If it's specifically a stack trace you need, take a look at https://github.com/ztrue/tracerr
I created this package in order to have both a stack trace and source fragments, to be able to debug faster and to log errors with much more detail.
Here is a code example:
package main

import (
    "io/ioutil"

    "github.com/ztrue/tracerr"
)

func main() {
    if err := read(); err != nil {
        tracerr.PrintSourceColor(err)
    }
}

func read() error {
    return readNonExistent()
}

func readNonExistent() error {
    _, err := ioutil.ReadFile("/tmp/non_existent_file")
    // Add stack trace to existing error, no matter if it's nil.
    return tracerr.Wrap(err)
}
And here is the output: a colorized stack trace annotated with the relevant source fragments.