golang restarted parent process doesn't receive SIGINT

I'm writing a little program to manage restarts of other processes.
Basically when an app process starts (call it A), it spawns a new process (call it D), which runs a simple HTTP server. When D receives an HTTP request, it kills A and restarts it.
The problem is that A no longer responds to CTRL-C, and I'm not sure why. It may be something simple, or maybe I don't really understand the relationship between processes, the terminal, and signals. But it's running in the same terminal with the same stdin/stdout/stderr. Below is a full program demonstrating this behaviour.
package main

import (
	"flag"
	"log"
	"net/http"
	"os"
	"os/exec"
	"strconv"
	"time"
)

/*
Running this program starts an app (repeatedly prints 'hi') and spawns a new process running a simple HTTP server.
When the server receives a request, it kills the other process and restarts it.
All three processes use the same stdin/stdout/stderr.
The restarted process does not respond to CTRL-C :(
*/

var serv = flag.Bool("serv", false, "run server")

// run the app or run the server
func main() {
	flag.Parse()
	if *serv {
		runServer()
	} else {
		runApp()
	}
}

// handle request to server
// url should contain pid of process to restart
func handler(w http.ResponseWriter, r *http.Request) {
	pid, err := strconv.Atoi(r.URL.Path[1:])
	if err != nil {
		log.Println("send a number...")
		return
	}

	// find the process
	proc, err := os.FindProcess(pid)
	if err != nil {
		log.Println("can't find proc", pid)
		return
	}

	// terminate the process
	log.Println("Terminating the process...")
	err = proc.Signal(os.Interrupt)
	if err != nil {
		log.Println("failed to signal interrupt")
		return
	}

	// restart the process
	cmd := exec.Command("restarter")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Println("Failed to restart app")
		return
	}
	log.Println("Process restarted")
}

// run the server.
// this will only work the first time and that's fine
func runServer() {
	http.HandleFunc("/", handler)
	if err := http.ListenAndServe(":9999", nil); err != nil {
		log.Println(err)
	}
}

// the app prints 'hi' in a loop
// but first it spawns a child process which runs the server
func runApp() {
	cmd := exec.Command("restarter", "-serv")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Println(err)
	}

	log.Println("This is my process. It goes like this")
	log.Println("PID:", os.Getpid())
	for {
		time.Sleep(time.Second)
		log.Println("hi again")
	}
}
The program expects to be installed. For convenience you can fetch it with go get github.com/ebuchman/restarter.
Run the program with restarter. It should print its process id. Then curl http://localhost:9999/<procid> to initiate the restart. The new process will now not respond to CTRL-C. Why? What am I missing?

This doesn't really have anything to do with Go. You start process A from your terminal shell. Process A starts process D (not sure what happened to B, but never mind). Process D kills process A. Now your shell sees that the process it started has exited, so the shell prepares to listen to another command. Process D starts another copy of process A, but the shell doesn't know anything about it. When you type ^C, the shell will handle it. If you run another program, the shell will arrange so that ^C goes to that program. The shell knows nothing about your copy of process A, so it's never going to direct a ^C to that process.

You can check out the approach taken by two HTTP server frameworks to listen for and intercept signals (including SIGINT, or even SIGTERM):
kornel661/nserv, where the ZeroDowntime-example/server.go uses a channel:
// catch signals:
signals := make(chan os.Signal)
signal.Notify(signals, os.Interrupt, os.Kill)
zenazn/goji, where graceful/signal.go uses a similar approach:
var stdSignals = []os.Signal{syscall.SIGINT, syscall.SIGTERM}
var sigchan = make(chan os.Signal, 1)
func init() {
	go waitForSignal()
}
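For reference, here is a minimal self-contained sketch of the same pattern (my own example, not code from either framework): a buffered channel registered with signal.Notify, and a goroutine that waits for SIGINT/SIGTERM and shuts down. Keep in mind that this only helps if a signal actually reaches the process; as explained above, the shell won't route ^C to the restarted process, but you can still send it a signal directly with kill -INT <pid>, which this handler will catch.

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// buffered channel so a signal isn't dropped if we aren't ready to receive yet
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		s := <-sigs
		log.Println("got signal:", s)
		// do any graceful shutdown here, then exit
		os.Exit(0)
	}()

	for {
		log.Println("hi again")
		time.Sleep(time.Second)
	}
}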

How to make an api call faster in Golang?

I am trying to upload a bunch of files using the company's API to the storage service they provide (basically to my account). I have a lot of files, 40-50 or so.
I have the full paths of the files and use os.Open, so that I can pass the io.Reader. I tried client.Files.Upload() without goroutines, but it took too much time to upload them, so I decided to use goroutines. Here is the implementation I tried. When I run the program it just uploads one file, the one with the lowest size or something, and then it waits for a long time. What is wrong with it? Isn't it the case that every time the for loop runs it creates a goroutine, continues its cycle, and creates one for every file? How can I make it as fast as possible with goroutines?
var filePaths []string
var wg sync.WaitGroup

// fills the slice of strings with the full paths of the files.
func fill() {
	filepath.Walk(rootpath, func(path string, info os.FileInfo, err error) error {
		if !info.IsDir() {
			filePaths = append(filePaths, path)
		}
		if err != nil {
			fmt.Println("ERROR:", err)
		}
		return nil
	})
}

func main() {
	fill()

	tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
	oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
	client := putio.NewClient(oauthClient)

	for _, path := range filePaths {
		wg.Add(1)
		go func() {
			defer wg.Done()
			f, err := os.Open(path)
			if err != nil {
				log.Println("err:OPEN", err)
			}

			upload, err := client.Files.Upload(context.TODO(), f, path, 0)
			if err != nil {
				log.Println("error uploading file:", err)
			}
			fmt.Println(upload)
		}()
	}
	wg.Wait()
}
Consider a worker pool pattern like this: https://go.dev/play/p/p6SErj3L6Yc
In this example application, I've taken out the API call and just list the file names. That makes it work on the playground.
A fixed number of worker goroutines are started. We'll use a channel to distribute their work and we'll close the channel to communicate the end of the work. This number could be 1 or 1000 routines, or more. The number should be chosen based on how many concurrent API operations your putio API can reasonably be expected to support.
paths is a chan string we'll use for this purpose.
workers range over paths channel to receive new file paths to upload
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

func main() {
	paths := make(chan string)
	var wg = new(sync.WaitGroup)

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go worker(paths, wg)
	}

	if err := filepath.Walk("/usr", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return fmt.Errorf("Failed to walk directory: %T %w", err, err)
		}
		if info.IsDir() {
			return nil
		}
		paths <- path
		return nil
	}); err != nil {
		panic(fmt.Errorf("failed Walk: %w", err))
	}
	close(paths)
	wg.Wait()
}

func worker(paths <-chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	for path := range paths {
		// do upload.
		fmt.Println(path)
	}
}
This pattern can handle an indefinitely large number of files without having to load the entire list into memory before processing it. As you can see, this doesn't make the code more complicated - actually, it's simpler.
When I run the program it just uploads one file which is the one
Function literals inherit the scope in which they were defined. This is why your code only uploaded one file: the path variable in the for loop was shared with each goroutine, so when that variable changed, all the goroutines picked up the change.
Avoid function literals unless you actually want to inherit scope. Functions defined at the global scope don't inherit any scope, and you must pass all relevant variables to those functions instead. This is a good thing - it makes the functions more straightforward to understand and makes variable "ownership" transitions more explicit.
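That said, if you do keep a function literal in a loop, one common fix is to pass the loop variable as an argument so each goroutine gets its own copy. A sketch (upload here stands in for your actual call):

for _, path := range filePaths {
	wg.Add(1)
	go func(p string) { // p is this goroutine's own copy of the path
		defer wg.Done()
		upload(p) // hypothetical upload helper
	}(path)
}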
An appropriate case to use a function literal could be the filepath.Walk parameter; its arguments are defined by filepath.Walk, so definition scope is one way to access other values - such as the paths channel, in our case.
Speaking of scope, global variables should be avoided unless their scope of usage is truly global. Prefer passing variables between functions to sharing global variables. Again, this makes variable ownership explicit and makes it easy to understand which functions do and don't access which variables. Neither your wait group nor your filePaths have any cause to be global.
f, err := os.Open(path)
Don't forget to close any files you open. When you're dealing with 40 or 50 files, letting all those open file handles pile up until the program ends isn't so bad, but it's a time bomb in your program that will go off when the number of files exceeds the ulimit of allowed open files. Because the function runs for much longer than the part where the file needs to be open, defer doesn't make sense in this case. I would use an explicit f.Close() after uploading the file.
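Putting that together, the upload part of a worker might look like this sketch. It mirrors the calls from your question; imports are omitted, and the client parameter type is assumed from putio.NewClient.

// sketch only: open, upload, close explicitly, move to the next path
func worker(paths <-chan string, wg *sync.WaitGroup, client *putio.Client) {
	defer wg.Done()
	for path := range paths {
		f, err := os.Open(path)
		if err != nil {
			log.Println("open:", err)
			continue
		}
		upload, err := client.Files.Upload(context.TODO(), f, path, 0)
		// close explicitly instead of defer, so handles don't pile up across the loop
		f.Close()
		if err != nil {
			log.Println("upload:", err)
			continue
		}
		fmt.Println(upload)
	}
}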

Golang - API Server and Socket at the same time

I'm trying to use sockets to communicate with my clients.
A socket would be created after some requests to my API. That is, a client connects (only by request), then joins a chat, so a socket is created and linked to the right channel.
I have already used sockets (C, C++, C#, Java), so I understand how they work, and from what I saw on the web I think what I want to build is possible, but I don't understand how to handle it in Go.
I create a first server:
func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", HomeHandler)
	r.HandleFunc("/products", ProductsHandler)
	r.HandleFunc("/articles", ArticlesHandler)
	http.Handle("/", r)
}
But for the socket, do I need another one?
package main

import "net"
import "fmt"
import "bufio"
import "strings" // only needed below for sample processing

func main() {
	fmt.Println("Launching server...")

	// listen on all interfaces
	ln, _ := net.Listen("tcp", ":8081")

	// accept connection on port
	conn, _ := ln.Accept()

	// run loop forever (or until ctrl-c)
	for {
		// will listen for message to process ending in newline (\n)
		message, _ := bufio.NewReader(conn).ReadString('\n')
		// output message received
		fmt.Print("Message Received:", string(message))
		// sample process for string received
		newmessage := strings.ToUpper(message)
		// send new string back to client
		conn.Write([]byte(newmessage + "\n"))
	}
}
Thanks for your help!
Based on our chat discussion.
OVERsimplified example with lots of pseudocode
import (
	"encoding/json"
	"errors"
	"net"
)

type User struct {
	name string
}

type Message struct {
	Action string
	Params map[string]string
}

type Server struct {
	connected_users                 map[*User]net.Conn
	users_connected_with_each_other map[*User][]*User
	good_users                      map[string]*User
}

func (srv *Server) ListenAndServe(addr string) error {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	return srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
}

func (srv *Server) Serve(l net.Listener) error {
	defer l.Close()
	for {
		rw, e := l.Accept()
		if e != nil {
			return e
		}
		// you want to create server_conn here with buffers, channels and stuff
		// to use async thread safe read/write from it
		go srv.serve_conn(rw)
	}
}

func (srv *Server) serve_conn(rw net.Conn) error {
	dec := json.NewDecoder(rw)
	var message Message

	// read the 1st message sent; it should be the token to connect
	dec.Decode(&message)
	token := get_token(message)
	user, ok := srv.good_users[token]
	if !ok {
		return errors.New("BAD USER!")
	}

	// store connected user
	srv.connected_users[user] = rw

	for {
		// async reader will be nice
		dec.Decode(&message)
		switch message.Action {
		case "Message":
			// find users to send message to
			if chats_with, ok := srv.users_connected_with_each_other[user]; ok {
				for _, user_to_send_message_to := range chats_with {
					// find connections to send message to
					if conn, ok := srv.connected_users[user_to_send_message_to]; ok {
						// send json encoded message
						err := json.NewEncoder(conn).Encode(message)
						// if write failed store message for later
					}
				}
			}
		// other cases
		default:
			// log?
		}
	}
}

func main() {
	known_users_with_tokens := make(map[string]*User)
	srv := &Server{
		connected_users:                 make(map[*User]net.Conn),
		users_connected_with_each_other: make(map[*User][]*User),
		good_users:                      known_users_with_tokens, // map is a reference type, so treat it like a pointer
	}
	// start our server
	go srv.ListenAndServe(":54321")

	ConnRequestHandler := func(w http.ResponseWriter, r *http.Request) {
		user := create_user_based_on_request(r)
		token := create_token(user)
		// now the user will be able to connect to the server with this token
		known_users_with_tokens[token] = user
	}
	ConnectUsersHandler := func(user1, user2 *User) {
		// you should guard your srv.* members to avoid concurrent read/writes to the maps
		srv.users_connected_with_each_other[user1] = append(srv.users_connected_with_each_other[user1], user2)
		srv.users_connected_with_each_other[user2] = append(srv.users_connected_with_each_other[user2], user1)
	}

	// initialize your API http.Server
	r := mux.NewRouter()
	r.HandleFunc("/", HomeHandler)
	r.HandleFunc("/products", ProductsHandler)
	r.HandleFunc("/articles", ArticlesHandler)
	r.HandleFunc("/connection_request", ConnRequestHandler) // added
	http.Handle("/", r)
}
Call ConnectUsersHandler(user1, user2) to allow them to communicate with each other.
known_users_with_tokens[token] = user allows a user to connect to the server.
You need to implement an async reader/writer for the connections to your server, and useful structs to keep track of the good Users.
Guard the Server struct members and provide thread-safe access when updating them.
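For that guarding, a minimal sketch (the addConn/getConn helper names are my own) is to embed a sync.RWMutex in the Server and go through small accessor methods:

type Server struct {
	mu              sync.RWMutex
	connected_users map[*User]net.Conn
	// ... other fields as above
}

// addConn registers a user's connection under the write lock.
func (srv *Server) addConn(u *User, c net.Conn) {
	srv.mu.Lock()
	defer srv.mu.Unlock()
	srv.connected_users[u] = c
}

// getConn looks up a user's connection under the read lock.
func (srv *Server) getConn(u *User) (net.Conn, bool) {
	srv.mu.RLock()
	defer srv.mu.RUnlock()
	c, ok := srv.connected_users[u]
	return c, ok
}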
UPDATE
It looks like json.NewEncoder(connection).Encode(&message) and json.NewDecoder(connection).Decode(&message) work from different goroutines in simple tests, so you may get away without manual synchronization, YAY! (Note, though, that encoding/json does not document the Encoder/Decoder as safe for concurrent use, so guard them if in doubt.)
The default http server accepts connections on one "host:port" only.
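Since the whole point is running both at once, here is a minimal self-contained sketch (my own example, separate from the pseudocode above) that serves an HTTP API and a raw TCP echo listener from the same process:

package main

import (
	"bufio"
	"log"
	"net"
	"net/http"
)

func main() {
	// raw TCP socket server in its own goroutine
	go func() {
		ln, err := net.Listen("tcp", ":8081")
		if err != nil {
			log.Fatal(err)
		}
		for {
			conn, err := ln.Accept()
			if err != nil {
				log.Println("accept:", err)
				continue
			}
			go func(c net.Conn) {
				defer c.Close()
				r := bufio.NewReader(c)
				for {
					line, err := r.ReadString('\n')
					if err != nil {
						return
					}
					c.Write([]byte(line)) // echo back
				}
			}(conn)
		}
	}()

	// HTTP API on another port; ListenAndServe blocks
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}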
The answer depends on what protocol you are going to use to communicate via your sockets.
I suggest doing it this way (much simplified):
1. Leave http.Server alone to serve your API (it implements the HTTP 1.*/2 protocols, so you don't need to worry about that).
2. Implement your own "MultiSocketServer". To do so:
2.1 Implement GracefulListener (must implement net.Listener); you need to shut down your sockets when you don't need them anymore, right?
2.2 Implement MultiSocketServer.Serve(l GracefulListener) (hello, http.Server.Serve()) to serve an individual connection. Your protocol for communicating with the client via sockets goes here; something like net/textproto will be easy to implement, since your GracefulListener.Accept() will return a net.Conn.
2.3 Add methods MultiSocketServer.ListenAndServe(addr) and MultiSocketServer.StopServe(l GracefulListener) to your MultiSocketServer.
type MultiSocketServer struct {
	listeners []GracefulListener // or a map?
	// lots of other stuff
}

// looks familiar? (http.Server.ListenAndServe)
func (s *MultiSocketServer) ListenAndServe(addr string) GracefulListener {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		// TODO: handle the error
	}
	graceful_listner := GracefulListener{ln} // pseudocode: wrap the raw listener
	s.listeners = append(s.listeners, graceful_listner)
	go s.Serve(graceful_listner)
	return graceful_listner
}

func (s *MultiSocketServer) StopServe(graceful_listner GracefulListener) {
	graceful_listner.Stop()
	// pseudocode
	remove_listener_from_slice(s.listeners, graceful_listner)
}
Of course, you need to add error checking and (probably) a mutex to guard MultiSocketServer.listeners to make it thread safe.
In your main(), start your API http.Server and initialize your MultiSocketServer. Now, from any http.Handler/http.HandlerFunc of your http.Server, you should be able to call MultiSocketServer.ListenAndServe(addr) to listen for and serve your socket connections.
UPDATE based on question
however, I'm not sure to understand the part "In your main()". If I understand it good, you said I have my API, and after starting it, I initialize MultiSocketServer. But where? after the starting of my API? Or you mean it would be better that I use the logic of your code as an API? Every request trough a socket
BTW: updated MultiSocketServer.ListenAndServe to start Listen and return graceful_listner
func main() {
	// init MultiSocketServer
	multi_socket_server := &MultiSocketServer{} // nil for the listeners slice is fine for now; complex initialization will be added later
	// no listeners yet, serves nothing

	// create a new Handler for your "socket requests"
	SocketRequestHandler := func(w http.ResponseWriter, r *http.Request) {
		// identify the client, assign him an address to connect to
		addr_to_listen := parse_request(r) // pseudocode
		listener := multi_socket_server.ListenAndServe(addr_to_listen)
		// TODO: handle errors
		// now your multi_socket_server listens on addr_to_listen and serves it with the multi_socket_server.Serve method in its own goroutine
		// as I said, the MultiSocketServer.Serve method must implement your protocol (plaintext Reader/Writer on the listener for now?)
		save_listener_in_context_or_whatever_you_like_to_track_it(listener) // pseudo
	}
	SocketDisconnectHandler := func(w http.ResponseWriter, r *http.Request) {
		// identify the client
		some_client := parse_request(r) // pseudocode
		// get the listener based on that info
		listener := get_listener_from_context_or_whatever(some_client) // pseudo
		multi_socket_server.StopServe(listener)
		// TODO: handle errors
	}

	// initialize your API http.Server
	r := mux.NewRouter()
	r.HandleFunc("/", HomeHandler)
	r.HandleFunc("/products", ProductsHandler)
	r.HandleFunc("/articles", ArticlesHandler)
	r.HandleFunc("/socket_request", SocketRequestHandler)       // added
	r.HandleFunc("/socket_disconnect", SocketDisconnectHandler) // added
	http.Handle("/", r)

	// this creates a new http.Server with DefaultServeMux as the Handler (which is configured by your http.Handle("/", r) call)
	http.ListenAndServe(":8080", nil) // start serving the API via the HTTP protocol
}
Actually, you may call multi_socket_server.ListenAndServe(addr_to_listen) and multi_socket_server.StopServe(listener) from any handler in your API server.
Every time you call multi_socket_server.ListenAndServe(addr_to_listen), it creates a new listener and serves on it, so you have to keep track of it (don't listen on the same address more than once; I think it will error out anyway).
Your MultiSocketServer.Serve might look like:
func (s *MultiSocketServer) Serve(l net.Listener) {
	defer l.Close()

	// accept a connection on this listener (pseudocode: single connection, no error handling)
	conn, _ := l.Accept()

	for {
		// will listen for a message to process, ending in newline (\n)
		message, _ := bufio.NewReader(conn).ReadString('\n')
		// output message received
		fmt.Print("Message Received:", string(message))
		// sample process for string received
		newmessage := strings.ToUpper(message)
		// send new string back to client
		conn.Write([]byte(newmessage + "\n"))
	}
}
Possible GracefulListener implementation github
Or are you trying to achieve something completely different? =)

conditionally running tests with build flags not working

I'm running some tests in golang and I want to avoid running the slow ones, for example this one uses bcrypt so it's slow:
// +build slow
package services

import (
	"testing"
	"testing/quick"
)

// using bcrypt takes too much time, reduce the number of iterations.
var config = &quick.Config{MaxCount: 20}

func TestSignaturesAreSame(t *testing.T) {
	same := func(simple string) bool {
		result, err := Encrypt(simple)
		success := err == nil && ComparePassWithHash(simple, result)
		return success
	}
	if err := quick.Check(same, config); err != nil {
		t.Error(err)
	}
}
To avoid running this every time, I've set up the // +build slow constraint. It should only run when doing go test -tags slow, but unfortunately it runs every time (the -v flag shows it running).
Any idea what's wrong?
Your // +build slow needs to be followed by a blank line
To distinguish build constraints from package documentation, a series of build constraints must be followed by a blank line.
See the Build Constraints documentation for details.
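For reference, the fixed file header would look like this (note the blank line before the package clause); on Go 1.17+ you would normally also write the //go:build form, which gofmt keeps in sync with the legacy comment:

//go:build slow
// +build slow

package services

With that in place, go test skips the file and go test -tags slow includes it.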

How do you get a Golang program to print the line number of the error it just called?

I was trying to throw errors in my Golang program with log.Fatal, but log.Fatal does not also print the line where it was called. Is there no way of getting access to the line number that called log.Fatal? i.e., is there a way to get the line number when throwing an error?
I was trying to google this but was unsure how. The best thing I could find was printing the stack trace, which I guess is good but might be a little too much. I also don't want to write debug.PrintStack() every time I need the line number; I am just surprised there isn't any built-in function for this, like log.FatalStackTrace() or something that isn't custom.
Also, the reason I do not want to make my own debugging/error handling stuff is that I don't want people to have to learn how to use my special custom handling code. I just want something standard where people can read my code later and be like
"ah ok, so it's throwing an error and doing X..."
The less people have to learn about my code the better :)
You can set the flags on either a custom Logger or the default logger to include Llongfile or Lshortfile:
// to change the flags on the default logger
log.SetFlags(log.LstdFlags | log.Lshortfile)
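As a minimal sketch of that in context (the exact timestamp will differ; the point is the file:line prefix):

package main

import "log"

func main() {
	log.SetFlags(log.LstdFlags | log.Lshortfile)
	log.Println("something went wrong")
	// prints something like:
	// 2009/11/10 23:00:00 main.go:7: something went wrong
}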
Short version: there's nothing directly built in. However, you can implement it with a minimal learning curve using runtime.Caller:
func HandleError(err error) (b bool) {
	if err != nil {
		// notice that we're using 1, so it will actually log where
		// the error happened, 0 = this function, we don't want that.
		_, filename, line, _ := runtime.Caller(1)
		log.Printf("[error] %s:%d %v", filename, line, err)
		b = true
	}
	return
}

// this logs the function name as well.
func FancyHandleError(err error) (b bool) {
	if err != nil {
		// notice that we're using 1, so it will actually log where
		// the error happened, 0 = this function, we don't want that.
		pc, filename, line, _ := runtime.Caller(1)
		log.Printf("[error] in %s[%s:%d] %v", runtime.FuncForPC(pc).Name(), filename, line, err)
		b = true
	}
	return
}

func main() {
	if FancyHandleError(fmt.Errorf("it's the end of the world")) {
		log.Print("stuff")
	}
}
playground
If you need exactly a stack trace, take a look at https://github.com/ztrue/tracerr
I created this package in order to have both a stack trace and source fragments, to be able to debug faster and log errors with much more detail.
Here is a code example:
package main

import (
	"io/ioutil"

	"github.com/ztrue/tracerr"
)

func main() {
	if err := read(); err != nil {
		tracerr.PrintSourceColor(err)
	}
}

func read() error {
	return readNonExistent()
}

func readNonExistent() error {
	_, err := ioutil.ReadFile("/tmp/non_existent_file")
	// Add stack trace to existing error, no matter if it's nil.
	return tracerr.Wrap(err)
}
The output is a colored stack trace with the relevant source fragments for each frame.

Exit with error code in go?

What's the idiomatic way to exit a program with some error code?
The documentation for Exit says "The program terminates immediately; deferred functions are not run.", and log.Fatal just calls Exit. For things that aren't heinous errors, terminating the program without running deferred functions seems extreme.
Am I supposed to pass around some state that indicate that there's been an error, and then call Exit(1) at some point where I know that I can exit safely, with all deferred functions having been run?
I do something along these lines in most of my real main packages, so that the return err convention is adopted as soon as possible and termination is handled in one place:
func main() {
	if err := run(); err != nil {
		fmt.Fprintf(os.Stderr, "error: %v\n", err)
		os.Exit(1)
	}
}

func run() error {
	err := something()
	if err != nil {
		return err
	}
	// etc
	return nil
}
In Python I commonly use a pattern, which being converted to Go looks like this:
func run() int {
	// here goes
	// the code
	return 1
}

func main() {
	os.Exit(run())
}
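Note that anything deferred inside run has already executed by the time run returns, so os.Exit only skips defers still pending in main itself. A small sketch:

package main

import (
	"fmt"
	"os"
)

func run() int {
	defer fmt.Println("cleanup") // runs when run returns, before os.Exit
	fmt.Println("doing work")
	return 1
}

func main() {
	os.Exit(run()) // exits with run's return value
}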
I think the clearest way to do it is to set exitCode at the top of main, then defer the os.Exit call as the next step. That lets you change exitCode anywhere in main, and the program will exit with its last value:
package main

import (
	"fmt"
	"os"
)

func main() {
	exitCode := 0
	defer func() { os.Exit(exitCode) }()

	// Do whatever, including deferring more functions
	defer func() {
		fmt.Printf("Do some cleanup\n")
	}()
	func() {
		fmt.Printf("Do some work\n")
	}()

	// But let's say something went wrong
	exitCode = 1

	// Do even more work/cleanup if you want

	// At the end, os.Exit will be called with the last value of exitCode
}
Output:
Do some work
Do some cleanup
Program exited: status 1.
Go Playground: https://play.golang.org/p/AMUR4m_A9Dw
Note that an important disadvantage of this is that you don't exit the process as soon as you set the error code.
As mentioned by fas, you have func Exit(exitcode int) in the os package.
However, if you need the deferred functions to run, you can always use the defer keyword like this:
http://play.golang.org/p/U-hAS88Ug4
You perform all your operations, assign an error variable, and at the very end, when everything is cleaned up, you can exit safely.
Otherwise, you could also use panic/recover:
http://play.golang.org/p/903e76GnQ-
When you have an error, you panic, and you clean up where you catch (recover) it.
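Since those playground links are old, here is a minimal sketch of that pattern (my own version): panic on error, recover in a deferred function in main, clean up, then exit non-zero.

package main

import (
	"fmt"
	"os"
)

func doWork() {
	defer fmt.Println("cleanup in doWork still runs") // defers run while the panic unwinds
	panic(fmt.Errorf("something went wrong"))
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Fprintln(os.Stderr, "error:", r)
			os.Exit(1) // exit after cleanup has had its chance
		}
	}()
	doWork()
}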
Yes, actually. The os package provides this.
package main

import "os"

func main() {
	os.Exit(1)
}
http://golang.org/pkg/os/#Exit
Edit: so it looks like you know of Exit. This article gives an overview of Panic which will let deferred functions run before returning. Using this in conjunction with an exit may be what you're looking for. http://blog.golang.org/defer-panic-and-recover
Another good way I follow is:
if err != nil {
	// log.Fatal will print the error message and internally call os.Exit(1), so the program will terminate
	log.Fatal("fatal error message")
}