I am building a performance-oriented REST API.
The skeleton was generated with go-swagger.
The API has 3 ms to answer, and with single requests it succeeds, needing only 0.5-0.8 ms per response. Two calls are made to Redis per request.
This is how the pool is initialized:
func createPool(server string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:     500,
		MaxActive:   10000,
		IdleTimeout: 5 * time.Second,
		//MaxConnLifetime: 1800 * time.Microsecond,
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", server)
			if err != nil {
				return nil, err
			}
			return c, nil
		},
		// Only ping connections that have been idle for 3 s or more.
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			if time.Since(t) < 3*time.Second {
				return nil
			}
			_, err := c.Do("PING")
			return err
		},
	}
}
And this is the only place where the pool is used:
func GetValue(params Params) []int64 {
	timeNow := time.Now()
	conn := data.Pool.Get()
	value1 := Foo(conn)
	value2 := Bar(value1, conn)
	conn.Close()
	defer Log(value1, value2)
	return value2
}
So basically at the start I get a connection from the pool, use it for the two Redis requests, and then close it. I previously used defer conn.Close() as stated in the documentation, and that didn't work either. vm.overcommit_memory=1 and net.core.somaxconn=512 are set on the server.
With single requests to the API there is no problem.
Under stress, at around 4,000 requests per second, it works for roughly the first 10 seconds and then becomes very slow, no longer answering within the 3 ms stated at the start.
When I check ActiveCount and IdleCount, the values are between 2 and 5 and always the same. Shouldn't a higher number of connections be possible with a MaxActive value of 10,000? Or am I missing some crucial setting?
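For reference, this is roughly how I sample those counters while the stress test runs (the sampling loop itself is just illustrative logging; ActiveCount and IdleCount are redigo's pool methods):

go func() {
	// Print the pool counters once a second during the stress test.
	for range time.Tick(time.Second) {
		log.Printf("pool active=%d idle=%d", data.Pool.ActiveCount(), data.Pool.IdleCount())
	}
}()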
The whole problem turned out not to be Redis-related. The sockets of the listening port were flooded, because the TCP connections weren't being closed properly during the stress test.
That resulted in around 60k connections in TIME_WAIT state. The problem was resolved by using live traffic for the stress test instead of JMeter.
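If you want to keep using a synthetic load generator, the key is to make it reuse connections instead of opening a fresh one per request. A minimal sketch of such a client in Go (the names and values here are illustrative, not from the original setup):

package loadtest

import (
	"io"
	"net/http"
	"time"
)

// newReusingClient returns an http.Client that keeps TCP connections alive
// and reuses them, instead of churning one connection per request, which is
// what piles up TIME_WAIT sockets on the server under test.
func newReusingClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxIdleConns:        1000,
			MaxIdleConnsPerHost: 1000, // the default of 2 is far too low for 4,000 req/s against one host
			IdleConnTimeout:     30 * time.Second,
		},
	}
}

// drain fully reads and closes a response body; a connection is only put
// back into the client's pool for reuse once its body has been consumed.
func drain(resp *http.Response) {
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}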
I am a beginner and am currently playing with the pubsub example from libp2p given here: https://github.com/libp2p/go-libp2p/tree/master/examples/pubsub/basic-chat-with-rendezvous
I have been able to build the code and run the binary in different terminals, and it works.
I am trying to automate this process from the main.go program itself, spinning up a few goroutines as new agents that publish messages to the network while the rest of the peers subscribe to them.
I have provided the modified code I have built so far, but it doesn't seem to work: the peers cannot discover each other.
func main() {
	help := flag.Bool("help", false, "Display Help")
	cfg := parseFlags()
	if *help {
		fmt.Printf("Simple example for peer discovery using mDNS. mDNS is great when you have multiple peers on a local LAN.\n")
		fmt.Printf("Usage: \n Run './chat-with-mdns'\nor Run './chat-with-mdns -host [host] -port [port] -rendezvous [string] -pid [proto ID]'\n")
		os.Exit(0)
	}
	fmt.Printf("[*] Listening on: %s with port: %d\n", cfg.listenHost, cfg.listenPort)
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		// Spawn a goroutine for each iteration of the loop. Pass 'i' into
		// the goroutine's function so each goroutine uses its own value.
		wg.Add(1)
		go func(i int) {
			// At the end of the goroutine, tell the WaitGroup
			// that another goroutine has completed.
			defer wg.Done()
			ctx := context.Background()
			r := rand.Reader
			// Create a new RSA key pair for this host.
			prvKey, _, err := crypto.GenerateKeyPairWithReader(crypto.RSA, 2048, r)
			if err != nil {
				panic(err)
			}
			// 0.0.0.0 will listen on any interface device.
			sourceMultiAddr, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", cfg.listenHost, cfg.listenPort))
			// libp2p.New constructs a new libp2p Host.
			// Other options can be added here.
			host, err := libp2p.New(
				libp2p.ListenAddrs(sourceMultiAddr),
				libp2p.Identity(prvKey),
			)
			if err != nil {
				panic(err)
			}
			// Set a function as stream handler. This function is called when
			// a peer initiates a connection and starts a stream with this peer.
			host.SetStreamHandler(protocol.ID(cfg.ProtocolID), handleStream)
			fmt.Printf("\n[*] Your Multiaddress Is: /ip4/%s/tcp/%v/p2p/%s\n", cfg.listenHost, cfg.listenPort, host.ID().Pretty())
			peerChan := initMDNS(host, cfg.RendezvousString)
			for { // allows multiple peers to join
				peer := <-peerChan // blocks until we discover a peer // the code currently hangs here
				fmt.Println("Found peer:", peer, ", connecting")
				if err := host.Connect(ctx, peer); err != nil {
					fmt.Println("Connection failed:", err)
					continue
				}
				//** this part of the code is experimental and is not reached by any goroutine yet **//
				stream, err := host.NewStream(ctx, peer.ID, protocol.ID(cfg.ProtocolID))
				if err != nil {
					fmt.Println("Stream open failed", err)
				} else {
					rw := bufio.NewReadWriter(bufio.NewReader(stream), bufio.NewWriter(stream))
					go writeData(rw)
					go readData(rw)
					fmt.Println("Connected to:", peer)
				}
				//** end of the experimental part **//
			}
		}(i)
	}
	fmt.Println("exit")
	wg.Wait()
	fmt.Println("Finished for loop")
}
But this doesn't seem to work. Are there any examples I can look at for solving this?
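One thing worth checking (an assumption on my part, not verified): all five goroutines build their listen address from the same cfg.listenPort, so only one host can actually bind it. A sketch of deriving a distinct port per goroutine (the basePort+i scheme is hypothetical):

import (
	"fmt"

	"github.com/multiformats/go-multiaddr"
)

// listenAddrFor derives a per-goroutine listen address so that each host can
// bind its own TCP port instead of all of them competing for the same one.
func listenAddrFor(host string, basePort, i int) (multiaddr.Multiaddr, error) {
	return multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", host, basePort+i))
}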
I have some SQL queries that do not change from request to request (only their parameters do). So, instead of doing this for each request:
func HandleRequest() {
	rows, err := db.Query(sqlQuery, params...)
	// do something with the data
}
is it okay if, for each request, I do this instead:
// together with server initialization
var stmt, err = db.Prepare(sqlQuery)

func HandleRequest() {
	rows, err := stmt.Query(params...)
	// do something with the data
}
As the documentation of DB.Prepare() states:
Multiple queries or executions may be run concurrently from the returned statement.
It is safe for concurrent use, although the intended use of prepared statements is not to share them between multiple requests. The main reason is that a prepared statement may allocate resources in the DB server itself, and those are not freed until you call the Close() method of the returned statement. So I'd advise against it.
The typical use case is when you have to run the same statement multiple times with different parameters, such as in this example from the documentation:
projects := []struct {
	mascot  string
	release int
}{
	{"tux", 1991},
	{"duke", 1996},
	{"gopher", 2009},
	{"moby dock", 2013},
}

stmt, err := db.Prepare("INSERT INTO projects(id, mascot, release, category) VALUES( ?, ?, ?, ? )")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close() // Prepared statements take up server resources and should be closed after use.

for id, project := range projects {
	if _, err := stmt.Exec(id+1, project.mascot, project.release, "open source"); err != nil {
		log.Fatal(err)
	}
}
I've run into a problem where I create a new connection for each request, which is terribly inefficient.
I would like to allow a set maximum number of TLS connections to stay open/cached on my client at once. When data is ready to be transmitted, the client first checks whether there is an idle connection, then whether it can create a new one (i.e. the number of open connections is below the maximum allowed). If both checks fail, it has to wait until either a connection becomes idle or the number of open connections decreases. Connections should also be killed after they have been idle for a set amount of time.
Here is some (bad) pseudocode of what I have in mind. Could I have some suggestions?
func newTLSConnection(dialer *net.Dialer, host string, tlsConfig *tls.Config) *tls.Conn {
	// Set up the certs
	// ...
	// Make a TLS connection.
	con, _ := tls.DialWithDialer(dialer, "tcp", host, tlsConfig)
	return con
}

func do(con *tls.Conn, someData []byte) {
	// Send some data to the server.
	_, _ = con.Write(someData)
	// Get the response from the server.
	response := make([]byte, 100)
	_, _ = io.ReadFull(con, response)
}

func main() {
	var cons []*tls.Conn
	maxConSize := 3
	for {
		if allConsInSliceAreBusy() && len(cons) < maxConSize {
			newCon := newTLSConnection( /* ... */ )
			cons = append(cons, newCon)
			do(newCon, []byte("stuff"))
		} else if !allConsInSliceAreBusy() {
			conToUse := firstOpenConInCons()
			do(conToUse, []byte("stuff"))
		} else {
			// NOP. Max cons created and they are all busy.
			// Wait for one to become idle or close.
		}
	}
}
Thank you!
What you are asking about is called connection pooling. Have a look at the fasthttp package source code: https://github.com/valyala/fasthttp/blob/master/client.go. You can even use that library, or another one, for your purpose.
There you can find the acquireConn func, which does exactly what you need:
It locks the connection pool to allow concurrent use.
It creates a new connection if the pool is empty (otherwise connections are pulled out of the pool).
It cleans up connections whose TTL has expired.
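If you would rather roll your own, a minimal channel-based pool covers the same acquire/release/limit behaviour. The sketch below is my own illustration under those requirements, not fasthttp's actual implementation; an idle-TTL sweep could be added on top as a background goroutine:

package pool

import (
	"crypto/tls"
	"errors"
	"time"
)

// Pool hands out at most `max` TLS connections. Idle connections are parked
// in a buffered channel; acquiring either reuses one, dials a new one while
// under the limit, or waits until a connection is released.
type Pool struct {
	idle    chan *tls.Conn
	slots   chan struct{} // capacity == max open connections
	dial    func() (*tls.Conn, error)
	timeout time.Duration // how long Get waits before giving up
}

func New(max int, dial func() (*tls.Conn, error), timeout time.Duration) *Pool {
	return &Pool{
		idle:    make(chan *tls.Conn, max),
		slots:   make(chan struct{}, max),
		dial:    dial,
		timeout: timeout,
	}
}

func (p *Pool) Get() (*tls.Conn, error) {
	select {
	case c := <-p.idle: // fast path: reuse a parked idle connection
		return c, nil
	default:
	}
	select {
	case c := <-p.idle: // a connection was released while we waited
		return c, nil
	case p.slots <- struct{}{}: // still below the limit: dial a new one
		c, err := p.dial()
		if err != nil {
			<-p.slots // give the slot back on dial failure
			return nil, err
		}
		return c, nil
	case <-time.After(p.timeout):
		return nil, errors.New("pool: timed out waiting for a connection")
	}
}

// Put parks a healthy connection for reuse.
func (p *Pool) Put(c *tls.Conn) { p.idle <- c }

// Discard closes a broken connection and frees its slot so Get can dial a
// replacement.
func (p *Pool) Discard(c *tls.Conn) {
	c.Close()
	<-p.slots
}

Usage would be Get() before each request, then Put() to park the connection for reuse, or Discard() if the request failed on it.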
I am really new to Golang and I have a question regarding testing.
I had a test where I wanted to check whether persisting a customer in Elasticsearch works or not. I've reduced the code to the critical part and posted it on GitHub: https://github.com/fvosberg/elastic-go-testing
The problem is that I have to wait for Elasticsearch to index the new document before I can search for it. Is there another option than waiting a second for this to happen? It feels very ugly, but I don't know how else to test the integration (working with Elasticsearch, lowercasing the email address, ...).
Are there solutions for this problem?
package main

import (
	"testing"
	"time"

	"github.com/fvosberg/elastic-go-testing/customer"
)

func TestRegistration(t *testing.T) {
	testCustomer := customer.Customer{Email: "testing#test.de"}
	testCustomer.Create()
	time.Sleep(time.Second * 1)
	_, err := customer.FindByEmail("testing#test.de")
	if err != nil {
		t.Logf("Error occurred: %+v\n", err)
		t.Fail()
	} else {
		t.Log("Found customer testing#test.de")
	}
}
Elasticsearch has a flush command that is useful for this situation. Since you're using the elastic project as an interface, you can use the following (where client is your ES client):
...
testCustomer.Create()

// Flush so the new document is persisted before we search for it.
if _, err := client.Flush().Do(); err != nil {
	t.Fatal(err)
}

_, err := customer.FindByEmail("testing#test.de")
...
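In case a flush alone is not enough: what makes new documents visible to search in Elasticsearch is a refresh of the index, so the equivalent call with the same client would be roughly as follows (an assumption on my part; the index name is hypothetical):

// Hypothetical: refresh the index that customer.Create() writes to, so the
// new document becomes visible to the subsequent search.
if _, err := client.Refresh("customers").Do(); err != nil {
	t.Fatal(err)
}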
We are trying to test locks. Basically, there are multiple clients trying to obtain a lock on a particular key. In the example below, we used the key "x".
I don't know how to verify that the locking works other than by reading the logs.
The correct sequence of events should be:
client1 obtains lock on key "x"
client2 tries to obtain lock on key "x" (fmt.Println("2 getting lock")) - but is blocked and waits
client1 releases lock on key "x"
client2 obtains lock on key "x"
Q1: How could I automate the process and turn this into a test?
Q2: What are some of the tips to testing concurrency / mutex locking in general?
func TestLockUnlock(t *testing.T) {
	client1, err := NewClient()
	if err != nil {
		t.Error("Unexpected new client error: ", err)
	}
	fmt.Println("1 getting lock")
	id1, err := client1.Lock("x", 10*time.Second)
	if err != nil {
		t.Error("Unexpected lock error: ", err)
	}
	fmt.Println("1 got lock")

	go func() {
		client2, err := NewClient()
		if err != nil {
			t.Error("Unexpected new client error: ", err)
		}
		fmt.Println("2 getting lock")
		id2, err := client2.Lock("x", 10*time.Second)
		if err != nil {
			t.Error("Unexpected lock error: ", err)
		}
		fmt.Println("2 got lock")
		fmt.Println("2 releasing lock")
		err = client2.Unlock("x", id2)
		if err != nil {
			t.Error("Unexpected Unlock error: ", err)
		}
		fmt.Println("2 released lock")
		err = client2.Close()
		if err != nil {
			t.Error("Unexpected connection close error: ", err)
		}
	}()

	fmt.Println("sleeping")
	time.Sleep(2 * time.Second)
	fmt.Println("finished sleeping")
	fmt.Println("1 releasing lock")
	err = client1.Unlock("x", id1)
	if err != nil {
		t.Error("Unexpected Unlock error: ", err)
	}
	fmt.Println("1 released lock")
	err = client1.Close()
	if err != nil {
		t.Error("Unexpected connection close error: ", err)
	}
	time.Sleep(5 * time.Second)
}
func NewClient() (*Client, error) {
	....
}

func (c *Client) Lock(lockKey string, timeout time.Duration) (lockId int64, err error) {
	....
}

func (c *Client) Unlock(lockKey string, lockId int64) error {
	....
}
Concurrency testing of lock-based code is hard, to the extent that provably correct solutions are difficult to come by. Ad-hoc manual testing via print statements is not ideal.
There are four kinds of dynamic concurrency problems that are essentially untestable. Along with the testing of performance, a statistical approach is the best you can achieve via test code (e.g. establishing that the 90th-percentile latency is below 10 ms, or that deadlock occurs with less than 1% likelihood).
This is one of the reasons that the Communicating Sequential Processes (CSP) approach provided by Go is a better fit than locks on shared memory. Consider that your goroutine under test provides a unit with specified behaviour. It can then be tested against other goroutines that supply the necessary test inputs via channels and monitor the result outputs via channels.
With CSP, using goroutines without any shared memory (and without any inadvertently shared memory via pointers) guarantees that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and Willcock) can establish that there won't be deadlock between goroutines. It then remains to establish that the functional behaviour is correct, for which the goroutine test harness mentioned above will do nicely.
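To make Q1 concrete: one way to automate the original test is to have each client record events on a channel and assert the resulting order at the end, rather than eyeballing log output. A sketch against the Client API from the question (the 100 ms sleep is a pragmatic, not bulletproof, way to give client2 time to reach its blocking Lock call):

import (
	"reflect"
	"testing"
	"time"
)

func TestLockBlocksSecondClient(t *testing.T) {
	events := make(chan string, 3) // buffered so sends never block

	client1, err := NewClient()
	if err != nil {
		t.Fatal("Unexpected new client error: ", err)
	}
	id1, err := client1.Lock("x", 10*time.Second)
	if err != nil {
		t.Fatal("Unexpected lock error: ", err)
	}
	events <- "1 locked"

	done := make(chan struct{})
	go func() {
		defer close(done)
		client2, err := NewClient()
		if err != nil {
			t.Error("Unexpected new client error: ", err)
			return
		}
		// This call must block until client1 releases the lock.
		id2, err := client2.Lock("x", 10*time.Second)
		if err != nil {
			t.Error("Unexpected lock error: ", err)
			return
		}
		events <- "2 locked"
		client2.Unlock("x", id2)
	}()

	time.Sleep(100 * time.Millisecond) // let client2 reach its Lock call
	events <- "1 unlocking"
	client1.Unlock("x", id1)
	<-done

	close(events)
	var got []string
	for e := range events {
		got = append(got, e)
	}
	want := []string{"1 locked", "1 unlocking", "2 locked"}
	if !reflect.DeepEqual(got, want) {
		t.Errorf("wrong event order: got %v, want %v", got, want)
	}
}

If the lock fails to block, "2 locked" lands in the channel before "1 unlocking" and the assertion fails, so the ordering check replaces the manual log reading.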