How to test concurrency and locking in golang?

We are trying to test locks. Basically, there are multiple clients trying to obtain a lock on a particular key. In the example below, we used the key "x".
I don't know how to test whether the locking is working. I can only read the logs to determine whether it is working.
The correct sequence of events should be:
client1 obtains lock on key "x"
client2 tries to obtain lock on key "x" (fmt.Println("2 getting lock")) - but is blocked and waits
client1 releases lock on key "x"
client2 obtains lock on key "x"
Q1: How could I automate the process and turn this into a test?
Q2: What are some of the tips to testing concurrency / mutex locking in general?
func TestLockUnlock(t *testing.T) {
    client1, err := NewClient()
    if err != nil {
        t.Error("Unexpected new client error: ", err)
    }
    fmt.Println("1 getting lock")
    id1, err := client1.Lock("x", 10*time.Second)
    if err != nil {
        t.Error("Unexpected lock error: ", err)
    }
    fmt.Println("1 got lock")

    go func() {
        client2, err := NewClient()
        if err != nil {
            t.Error("Unexpected new client error: ", err)
        }
        fmt.Println("2 getting lock")
        id2, err := client2.Lock("x", 10*time.Second)
        if err != nil {
            t.Error("Unexpected lock error: ", err)
        }
        fmt.Println("2 got lock")
        fmt.Println("2 releasing lock")
        err = client2.Unlock("x", id2)
        if err != nil {
            t.Error("Unexpected Unlock error: ", err)
        }
        fmt.Println("2 released lock")
        err = client2.Close()
        if err != nil {
            t.Error("Unexpected connection close error: ", err)
        }
    }()

    fmt.Println("sleeping")
    time.Sleep(2 * time.Second)
    fmt.Println("finished sleeping")
    fmt.Println("1 releasing lock")
    err = client1.Unlock("x", id1)
    if err != nil {
        t.Error("Unexpected Unlock error: ", err)
    }
    fmt.Println("1 released lock")
    err = client1.Close()
    if err != nil {
        t.Error("Unexpected connection close error: ", err)
    }
    time.Sleep(5 * time.Second)
}
func NewClient() (*Client, error) {
    ....
}

func (c *Client) Lock(lockKey string, timeout time.Duration) (lockId int64, err error) {
    ....
}

func (c *Client) Unlock(lockKey string, lockId int64) error {
    ....
}

Concurrency testing of lock-based code is hard, to the extent that provably-correct solutions are difficult to come by. Ad-hoc manual testing via print statements is not ideal.
There are four dynamic concurrency problems that are essentially untestable. Along with the testing of performance, a statistical approach is the best you can achieve via test code (e.g. establishing that the 90th percentile latency is better than 10ms or that deadlock occurs in less than 1% of runs).
This is one of the reasons that the Communicating Sequential Processes (CSP) approach provided by Go is better to use than locks on shared memory. Consider that your Goroutine under test provides a unit with specified behaviour. This can be tested against other Goroutines that provide the necessary test inputs via channels and monitor result outputs via channels.
With CSP, using Goroutines without any shared memory (and without any inadvertently shared memory via pointers) will guarantee that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and Willcock) can establish that there won't be deadlock between Goroutines. It then remains to establish that the functional behaviour is correct, for which the Goroutine test-harness mentioned above will do nicely.
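For Q1 specifically, the ordering can be asserted instead of eyeballed by recording events on a channel and comparing the sequence at the end. A minimal sketch, assuming the Client, Lock and Unlock API from the question (it lives in the same test package as above and additionally imports reflect; the timeouts and event names are illustrative):
func TestLockBlocksSecondClient(t *testing.T) {
    events := make(chan string, 4)

    client1, err := NewClient()
    if err != nil {
        t.Fatal(err)
    }
    id1, err := client1.Lock("x", 10*time.Second)
    if err != nil {
        t.Fatal(err)
    }
    events <- "1 locked"

    done := make(chan struct{})
    go func() {
        defer close(done)
        client2, err := NewClient()
        if err != nil {
            t.Error(err)
            return
        }
        id2, err := client2.Lock("x", 10*time.Second) // should block until client1 unlocks
        if err != nil {
            t.Error(err)
            return
        }
        events <- "2 locked"
        if err := client2.Unlock("x", id2); err != nil {
            t.Error(err)
        }
    }()

    time.Sleep(100 * time.Millisecond) // give client2 a chance to reach Lock
    events <- "1 unlocking"
    if err := client1.Unlock("x", id1); err != nil {
        t.Fatal(err)
    }

    select {
    case <-done:
    case <-time.After(5 * time.Second):
        t.Fatal("client2 never obtained the lock")
    }
    close(events)

    want := []string{"1 locked", "1 unlocking", "2 locked"}
    var got []string
    for e := range events {
        got = append(got, e)
    }
    if !reflect.DeepEqual(got, want) {
        t.Fatalf("wrong event order: got %v, want %v", got, want)
    }
}
If the lock does not actually block, client2 acquires it immediately, "2 locked" is recorded before "1 unlocking", and the comparison fails; if client2 never acquires it, the test times out.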

Related

Redigo connection pool - how to get more connections?

I am building a performance-oriented REST API.
The skeleton was built with go-swagger.
The API has 3ms to answer and succeeds in single-request use, needing only 0.5ms - 0.8ms for the response. There are two calls made to Redis.
This is how the pool is initiated:
func createPool(server string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     500,
        MaxActive:   10000,
        IdleTimeout: 5 * time.Second,
        //MaxConnLifetime: 1800 * time.Microsecond,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            return c, err
        },
        TestOnBorrow: func(c redis.Conn, t time.Time) error {
            if time.Since(t) < (3 * time.Second) {
                return nil
            }
            _, err := c.Do("PING")
            return err
        },
    }
}
And this is the only place where the pool is used:
func GetValue(params Params) []int64 {
    timeNow := time.Now()
    conn := data.Pool.Get()
    value1 := Foo(conn)
    value2 := Bar(value1, conn)
    conn.Close()
    defer Log(value1, value2)
    return value2
}
So basically at the start I get a connection from the pool, use it for the two Redis requests and then close it. I previously used defer conn.Close() as stated in the documentation, and that didn't work either. vm.overcommit_memory=1 and net.core.somaxconn=512 were set on the server.
In single-request use of the API there is no problem.
Under load, around 4000 requests per second, it works for roughly the first 10 seconds, then becomes very slow and no longer answers within the 3ms stated above.
When I check ActiveCount and IdleCount, the values are between 2 and 5 and stay roughly the same. Shouldn't a higher number of connections be possible with a MaxActive value of 10,000? Or am I missing some crucial settings?
The problem turned out not to be Redis-related at all. The sockets of the listening port were flooded because the TCP connections weren't closed properly during the stress test.
That resulted in around 60k connections in TIME_WAIT state. The problem was resolved when using live traffic for the stress test instead of JMeter.
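As a general diagnostic for this kind of issue, it can help to log the pool counters while the load test runs, to see whether the pool itself or something outside it (like the TIME_WAIT flood above) is the bottleneck. A minimal sketch using redigo's ActiveCount/IdleCount (the pool variable, interval and import path depend on your setup and redigo version):
// logPoolStats periodically prints how many pool connections are active and idle.
// Run it in a goroutine for the duration of the load test, e.g.:
//   stop := make(chan struct{})
//   go logPoolStats(data.Pool, time.Second, stop)
func logPoolStats(pool *redis.Pool, interval time.Duration, stop <-chan struct{}) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            log.Printf("redis pool: active=%d idle=%d", pool.ActiveCount(), pool.IdleCount())
        case <-stop:
            return
        }
    }
}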

"Error op_response:0" with prepared statement

I'm using the Firebird database driver from "github.com/nakagami/firebirdsql" with Go 1.11 + Firebird 2.5.
But I can't get a prepared SELECT to work; it throws an "Error op_response:0" error when executing the second QueryRow(). Any ideas?
Is there an alternative driver, or am I using this one incorrectly?
func test1(tx *sql.Tx) {
    sqlStr := "SELECT number FROM order WHERE id=?"
    stmt, err := tx.Prepare(sqlStr)
    if err != nil {
        panic(err.Error())
    }
    var value string
    err = stmt.QueryRow(123).Scan(&value)
    if err != nil {
        panic(err.Error())
    }
    fmt.Println(value)
    err = stmt.QueryRow(200).Scan(&value)
    if err != nil {
        panic(err.Error())
    }
    fmt.Println(value)
}
Result:
INV20183121
panic: Error op_response:0
goroutine 1 [running]:
main.test1(0xc00009c000, 0xc0000a8200)
I can venture a guess. Looking at the github.com/nakagami/firebirdsql sources, this seems to be the only code path which can produce this error. Looking here, it ignores any network errors returned by recvPackets, which means: if anything on the network socket breaks, you get this error back (because that's what recvPackets returns in case of a network error).
I'd suggest rebuilding your code with the debugPrint code uncommented and seeing what is actually going on on the network connection.
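If the debug output doesn't pin it down, one workaround worth trying (an assumption about the driver's statement-handle reuse, not a confirmed fix) is to drop the long-lived prepared statement and let database/sql prepare per call via tx.QueryRow:
// test1 queries the two rows without keeping a prepared statement across calls.
// This sidesteps reusing the driver's statement handle, at the cost of an extra
// prepare round-trip per query.
func test1(tx *sql.Tx) error {
    const sqlStr = "SELECT number FROM order WHERE id=?"
    for _, id := range []int{123, 200} {
        var value string
        if err := tx.QueryRow(sqlStr, id).Scan(&value); err != nil {
            return err
        }
        fmt.Println(value)
    }
    return nil
}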

What is an easy way to commit work half way through a transaction, but then continue to

Background
I am using the github.com/jmoiron/sqlx golang package with a Postgres database.
I have the following wrapper function to run SQL code in a transaction:
func (s *postgresStore) runInTransaction(ctx context.Context, fn func(*sqlx.Tx) error) error {
    tx, err := s.db.Beginx()
    if err != nil {
        return err
    }
    defer func() {
        if err != nil {
            tx.Rollback()
            return
        }
        err = tx.Commit()
    }()
    err = fn(tx)
    return err
}
Given this, consider the following code:
func (s *store) SampleFunc(ctx context.Context) error {
    err := s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point A: Do some database work
        if err := tx.Commit(); err != nil {
            return err
        }
        // Point B: Do some more database work, which may return an error
        return nil
    })
    return err
}
Desired behavior
If there is an error at Point A, then the transaction should have done zero work
If there is an error at Point B, then the transaction should still have completed the work at Point A.
Problem with current code
The code does not work as intended at the moment, because I am committing the transaction twice (once in runInTransaction, once in SampleFunc).
A Possible Solution
Where I commit the transaction, I could instead run something like tx.Exec("SAVEPOINT my_savepoint"), then defer tx.Exec("ROLLBACK TO SAVEPOINT my_savepoint")
After the code at Point B, I could run: tx.Exec("RELEASE SAVEPOINT my_savepoint")
So, if the code at Point B runs without error, the savepoint has already been released and the deferred ROLLBACK TO SAVEPOINT has nothing left to undo.
Problems with Possible Solution
I'm not sure if using savepoints will mess with the database/sql package's behavior. Also, my solution seems a bit messy -- surely there is a cleaner way to do this!
Multiple transactions
You can split your work in two transactions:
func (s *store) SampleFunc(ctx context.Context) error {
    err := s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point A: Do some database work
        return nil
    })
    if err != nil {
        return err
    }
    return s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point B: Do some more database work, which may return an error
        return nil
    })
}
I had a similar problem: lots of steps in one transaction.
After starting the transaction:
BEGIN
In a loop:
SAVEPOINT s1
Some actions ....
If I get an error: ROLLBACK TO SAVEPOINT s1
If OK, go to the next step
Finally: COMMIT
This approach gives me the ability to perform the steps one by one. If some steps fail, I can throw away only those, keeping the others, and finally commit all the "good" work.
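A minimal sketch of that loop with sqlx (the function name, savepoint naming scheme and steps slice are illustrative, not part of the original code):
// runSteps runs each step inside its own savepoint: a failing step is rolled
// back to its savepoint and skipped, while the work of the other steps is
// kept and committed at the end.
func runSteps(ctx context.Context, db *sqlx.DB, steps []func(*sqlx.Tx) error) error {
    tx, err := db.Beginx()
    if err != nil {
        return err
    }
    for i, step := range steps {
        sp := fmt.Sprintf("s%d", i) // savepoint names cannot be bind parameters
        if _, err := tx.ExecContext(ctx, "SAVEPOINT "+sp); err != nil {
            tx.Rollback()
            return err
        }
        if err := step(tx); err != nil {
            // Throw away only this step's work and continue with the rest.
            if _, rbErr := tx.ExecContext(ctx, "ROLLBACK TO SAVEPOINT "+sp); rbErr != nil {
                tx.Rollback()
                return rbErr
            }
            continue
        }
        if _, err := tx.ExecContext(ctx, "RELEASE SAVEPOINT "+sp); err != nil {
            tx.Rollback()
            return err
        }
    }
    // Commit everything that was not rolled back to a savepoint.
    return tx.Commit()
}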

Effect of duplicate Redis subscription to same channel name

To subscribe to an instance of StackExchange.Redis.ISubscriber one needs to call the following API:
void Subscribe(RedisChannel channel, Action<RedisChannel, RedisValue> handler, CommandFlags flags = CommandFlags.None);
The question is: what happens if one calls this same line of code twice with the same channel name as a simple string, say "TestChannel"?
Does ISubscriber check for string equality, or does it not care, so that we end up with two subscriptions?
I am making an assumption that your question is targeted at the Redis API itself. Please let me know if it isn't.
The answer is also based on the assumption that you are using a single redis client connection.
The pubsub map is a hashtable.
To answer your question: if you subscribe multiple times with the same string, you will continue to have only one subscription (you can see that the subscription is stored in the hashtable here: https://github.com/antirez/redis/blob/3.2.6/src/pubsub.c#L64).
Conversely, calling a single unsubscribe will unsubscribe your other subscriptions for that channel/pattern as well.
If it helps, here is a simple example in Go (I have used the go-redis library) that illustrates the unsubscribe and hashtable storage parts of the answer.
package main

import (
    "fmt"
    "log"
    "time"

    "github.com/go-redis/redis"
)

func main() {
    cl := redis.NewClient(&redis.Options{
        Addr:     "127.0.0.1:6379",
        PoolSize: 1,
    })
    ps := cl.Subscribe()
    err := ps.Subscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }
    err = ps.Subscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }
    err = ps.Unsubscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        msg, err := ps.ReceiveMessage()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(msg.Payload)
    }()
    err = cl.Publish("testchannel", "some value").Err()
    if err != nil {
        log.Fatal(err)
    }
    time.Sleep(10 * time.Second)
}
A channel may have multiple subscribers. All clients that subscribe to the same channel will receive the messages published on that channel.

Testing Elasticsearch in Golang without sleep

I am really new to Golang and I have a question regarding testing.
I have a test where I want to check whether persisting a customer in Elasticsearch works. I've reduced the code to the critical part and posted it on GitHub: https://github.com/fvosberg/elastic-go-testing
The problem is that I have to wait for Elasticsearch to index the new document before I can search for it. Is there another option besides waiting a second for this to happen? This feels very ugly, but I don't know how else I can test the integration (working with Elasticsearch, lowercasing the email address ...).
Are there solutions for this problem?
package main

import (
    "testing"
    "time"

    "github.com/fvosberg/elastic-go-testing/customer"
)

func TestRegistration(t *testing.T) {
    testCustomer := customer.Customer{Email: "testing@test.de"}
    testCustomer.Create()
    time.Sleep(time.Second * 1)
    _, err := customer.FindByEmail("testing@test.de")
    if err != nil {
        t.Logf("Error occurred: %+v\n", err)
        t.Fail()
    } else {
        t.Log("Found customer testing@test.de")
    }
}
Elasticsearch has a flush command that is useful for this situation. Since you're using the elastic project as an interface, you can use the following (where client is your ES client):
...
testCustomer.Create()

res, err := client.Flush().Do()
if err != nil {
    t.Fatal(err)
}

_, err = customer.FindByEmail("testing@test.de")
...
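If you'd rather not call flush from the test, another option is to poll until the document becomes searchable instead of sleeping for a fixed second. A sketch reusing FindByEmail from the question (the timeout and poll interval are arbitrary):
// waitForCustomer polls FindByEmail until the document is searchable or the
// timeout expires, so the test waits only as long as it has to.
func waitForCustomer(email string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        _, err := customer.FindByEmail(email)
        if err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return err
        }
        time.Sleep(50 * time.Millisecond)
    }
}
In the test above, the time.Sleep line would then become something like: if err := waitForCustomer("testing@test.de", 2*time.Second); err != nil { t.Fatal(err) }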