Testing Elasticsearch in Golang without sleep

I am really new to Golang and I have a question regarding testing.
I had a test where I wanted to check whether persisting a customer in Elasticsearch works. I've reduced the code to the critical part and posted it on GitHub: (https://github.com/fvosberg/elastic-go-testing)
The problem is that I have to wait for Elasticsearch to index the new document before I can search for it. Is there another option than waiting a second for this to happen? This feels very ugly, but I don't know how else to test the integration (working with Elasticsearch, lowercasing the email address, ...).
Are there solutions for this problem?
package main

import (
    "testing"
    "time"

    "github.com/fvosberg/elastic-go-testing/customer"
)

func TestRegistration(t *testing.T) {
    testCustomer := customer.Customer{Email: "testing#test.de"}
    testCustomer.Create()

    // Wait for Elasticsearch to index the new document before searching for it.
    time.Sleep(time.Second * 1)

    _, err := customer.FindByEmail("testing#test.de")
    if err != nil {
        t.Logf("Error occurred: %+v\n", err)
        t.Fail()
    } else {
        t.Log("Found customer testing#test.de")
    }
}

Elasticsearch has a flush command that is useful for this situation. Since you're using the elastic project as an interface, you can use the following (where client is your ES client):
...
testCustomer.Create()

// Flush so the newly indexed document is visible before searching for it.
_, err := client.Flush().Do()
if err != nil {
    t.Fatal(err)
}

_, err = customer.FindByEmail("testing#test.de")
...
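
Depending on the client version and on what Create() does, it may actually be an index refresh rather than a flush that makes the new document visible to search. A minimal sketch of the same test using a refresh, assuming the same (older) olivere/elastic client as in the flush example and a hypothetical index name "customers":

...
testCustomer.Create()

// Refresh the index so the freshly indexed document becomes searchable.
// "customers" is a hypothetical index name; use whatever index Create() writes to.
_, err := client.Refresh("customers").Do()
if err != nil {
    t.Fatal(err)
}

_, err = customer.FindByEmail("testing#test.de")
...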

Related

How to check a log/output in go test?

I have this function that logs the error in some cases:
func readByte( /*...*/ ) {
    // ...
    if err != nil {
        fmt.Println("ERROR")
        log.Print("Couldn't read first byte")
        return
    }
    // ...
}
Now, in the test file, I want to check the error output from this function:
c.Assert(OUTPUT, check.Matches, "teste")
How can I access the log? I tried using a buffer, but it didn't work. What is the right way to capture this log without changing my readByte function code?
For example,
readbyte_test.go:
package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "os"
    "testing"
)

func readByte( /*...*/ ) {
    // ...
    err := io.EOF // force an error
    if err != nil {
        fmt.Println("ERROR")
        log.Print("Couldn't read first byte")
        return
    }
    // ...
}

func TestReadByte(t *testing.T) {
    var buf bytes.Buffer
    // Redirect the standard logger into the buffer for the duration of the test.
    log.SetOutput(&buf)
    defer func() {
        log.SetOutput(os.Stderr)
    }()
    readByte()
    t.Log(buf.String())
}
Output:
$ go test -v readbyte_test.go
=== RUN TestReadByte
ERROR
--- PASS: TestReadByte (0.00s)
readbyte_test.go:30: 2017/05/22 16:41:00 Couldn't read first byte
PASS
ok command-line-arguments 0.004s
$
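
If you want to assert on the captured message instead of just logging it, a check like this can be added at the end of TestReadByte (a sketch using only the standard library's strings package; the expected text mirrors the log.Print call above):

want := "Couldn't read first byte"
if !strings.Contains(buf.String(), want) {
    t.Errorf("expected log output to contain %q, got %q", want, buf.String())
}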
Answer for Concurrent Tests
If your test is running concurrently (for example, when testing an http Server or Client), you may encounter a race between writing to the buffer and reading from it. Instead of the buffer, we can redirect output to an os.Pipe and use a bufio.Scanner, whose Scan() method blocks until output has been written.
Here is an example of creating an os.Pipe and setting the stdlib log package to use the pipe. Note my use of the testify/assert package here:
func mockLogger(t *testing.T) (*bufio.Scanner, *os.File, *os.File) {
    reader, writer, err := os.Pipe()
    if err != nil {
        assert.Fail(t, "couldn't get os Pipe: %v", err)
    }
    log.SetOutput(writer)
    return bufio.NewScanner(reader), reader, writer
}
The *os.File objects are returned so they can be properly closed with a deferred function. Here I'm just printing to stdout, since if there were some strange error on close I personally wouldn't want to fail the test. However, this could easily be another call to t.Errorf or similar if you wanted:
func resetLogger(reader *os.File, writer *os.File) {
    err := reader.Close()
    if err != nil {
        fmt.Println("error closing reader was ", err)
    }
    if err = writer.Close(); err != nil {
        fmt.Println("error closing writer was ", err)
    }
    log.SetOutput(os.Stderr)
}
And then in your test you would have this pattern:
scanner, reader, writer := mockLogger(t) // turn this off when debugging or developing, as you will miss output!
defer resetLogger(reader, writer)

// other setup as needed, getting some value for thing below

go concurrentAction()

scanner.Scan()        // blocks until a new line is written to the pipe
got := scanner.Text() // the last line written to the scanner

msg := fmt.Sprintf("your log message with thing %v you care about", thing)
assert.Contains(t, got, msg)
And finally, the concurrentAction() function calls a log function (or a method, if you're using a log.Logger; the package behaves the same way with the log.SetOutput() call above either way), for example:
// doing something, getting value for thing
log.Printf("your log message with the thing %v you care about", thing)
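
Putting the pieces together, a complete test could look roughly like this; concurrentAction, thing, and the log message are placeholders carried over from the snippets above, and the testify/assert package is assumed as before:

func TestConcurrentAction(t *testing.T) {
    scanner, reader, writer := mockLogger(t)
    defer resetLogger(reader, writer)

    thing := "example" // placeholder; in a real test this comes from your setup

    // concurrentAction is assumed to log via the standard log package, as shown above.
    go concurrentAction()

    scanner.Scan() // blocks until concurrentAction writes a line to the pipe
    got := scanner.Text()

    msg := fmt.Sprintf("your log message with the thing %v you care about", thing)
    assert.Contains(t, got, msg)
}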

Effect of duplicate Redis subscription to same channel name

To subscribe to an instance of StackExchange.Redis.ISubscriber one needs to call the following API:
void Subscribe(RedisChannel channel, Action<RedisChannel, RedisValue> handler, CommandFlags flags = CommandFlags.None);
The question is, what happens if one calls this same line of code twice with the same channel name as a simple string, say "TestChannel"?
Does ISubscriber check for string equality, or does it not care, so that we end up with two subscriptions?
I am making an assumption that your question is targeted at the Redis API itself. Please let me know if it isn't.
The answer is also based on the assumption that you are using a single redis client connection.
The pubsub map is a hashtable.
To answer your question: if you subscribe multiple times with the same string, you will still have only one subscription (you can see that the subscribe happens via the hashtable here: https://github.com/antirez/redis/blob/3.2.6/src/pubsub.c#L64).
Conversely, calling a single unsubscribe will unsubscribe your other subscriptions for that channel/pattern as well.
If it helps, here is a simple example in Go (I have used the go-redis library) that illustrates the unsubscribe and hashtable storage parts of the answer.
package main

import (
    "fmt"
    "log"
    "time"

    "github.com/go-redis/redis"
)

func main() {
    cl := redis.NewClient(&redis.Options{
        Addr:     "127.0.0.1:6379",
        PoolSize: 1,
    })

    ps := cl.Subscribe()

    // Subscribe twice to the same channel; Redis keys subscriptions by channel
    // name in a hashtable, so this still results in a single subscription.
    err := ps.Subscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }
    err = ps.Subscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }

    // A single unsubscribe removes the subscription entirely.
    err = ps.Unsubscribe("testchannel")
    if err != nil {
        log.Fatal(err)
    }

    go func() {
        msg, err := ps.ReceiveMessage()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(msg.Payload)
    }()

    err = cl.Publish("testchannel", "some value").Err()
    if err != nil {
        log.Fatal(err)
    }

    time.Sleep(10 * time.Second)
}
A channel may have multiple subscribers. All clients who subscribe to the same channel will receive the messages published on that channel.
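
To illustrate that last point, here is a rough sketch (again with the go-redis version used above; the channel name and message are placeholders) in which two separate clients subscribe to the same channel and both receive a published message:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/go-redis/redis"
)

func main() {
    // subscribe creates its own client connection and prints any message it receives.
    subscribe := func(name string) {
        cl := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
        ps := cl.Subscribe("testchannel")
        go func() {
            msg, err := ps.ReceiveMessage()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(name, "received:", msg.Payload)
        }()
    }

    subscribe("client1")
    subscribe("client2")

    time.Sleep(time.Second) // give both subscriptions time to be established

    pub := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
    if err := pub.Publish("testchannel", "some value").Err(); err != nil {
        log.Fatal(err)
    }

    time.Sleep(time.Second) // wait for both subscribers to print the message
}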

Unable to get a list of mattermost channels in golang

I am trying to create a bot and retrieve the list of channels.
I used the bot example in the repository and it is mostly working, except for the part where it has to get the list of channels.
Either I am doing something silly, or the GetChannels API really does not work the way it is described in bot_sample.go.
I made a smaller separate function to test that part.
Adding code here for better readability:
func mattermostPrintChannels(client *mattermost.Client) {
    channelsResult, err := client.GetChannels("")
    if err != nil {
        fmt.Print("Couldn't get channels: ", err)
        return
    }

    channelList := channelsResult.Data.(*mattermost.ChannelList)

    fmt.Print("Channels:")
    for _, channel := range channelList.Channels {
        fmt.Printf("%s -> %s", channel.Id, channel.DisplayName)
    }
}
This code gives me the error:
./mattermost.go:30: channelList.Channels undefined (type *model.ChannelList has no field or method Channels)
Now if I just print the contents of the ChannelList variable (using spew), I get the following:
channelList: : ([]interface {}) (len=1 cap=1) {
(*model.ChannelList)(<nil>)
}
JimB is correct. The model.ChannelList type used to be a struct, but it recently changed to []*model.Channel. You'll want to change
for _, channel := range channelList.Channels {
to
for _, channel := range *channelList {
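
With that change, the function from the question would look roughly like this (otherwise unchanged):

func mattermostPrintChannels(client *mattermost.Client) {
    channelsResult, err := client.GetChannels("")
    if err != nil {
        fmt.Print("Couldn't get channels: ", err)
        return
    }

    // model.ChannelList is now a slice type ([]*model.Channel), so range over it directly.
    channelList := channelsResult.Data.(*mattermost.ChannelList)

    fmt.Print("Channels:")
    for _, channel := range *channelList {
        fmt.Printf("%s -> %s", channel.Id, channel.DisplayName)
    }
}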

How to test transaction rollback and commit in go lang

I have code like this:
tx, _ := db.Begin()
defer tx.Rollback()

err := db.Insert(foo)
err = db.Delete(bar)

if err == nil {
    tx.Commit()
}
and I have no idea how to write the two test cases:
successful (data inserted and deleted)
error (nothing changes)
I was thinking about:
monkey patching via function injection into the method that does the db operations, and swapping that function in the test
monkey patching by making the foo sql global and changing it - I don't like that too much
making the db disallow delete operations for the duration of the test
None of the above options seems ideal, so how should I write these test cases?
Have a look at my library dbwrap (https://github.com/metakeule/dbwrap), which implements a driver.Driver wrapping around another driver.
It also has a fake driver that you can use like this.
package main

import (
    "fmt"

    "github.com/metakeule/dbwrap"
)

var fake, db = dbwrap.NewFake()

func q1() {
    fake.SetNumInputs(1)
    db.Query("Select ?", "hiho")
    q, v := fake.LastQuery()
    fmt.Println(q, v)
}
Use the source code of fake.go as a starting point.
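
As a rough sketch of how the fake driver could be used inside a test (this assumes LastQuery returns the recorded SQL text as a string plus its arguments; the DELETE statement is only an example, and the exact API should be checked against fake.go):

func TestDeleteIssuesExpectedSQL(t *testing.T) {
    fake.SetNumInputs(1)

    // Run the code under test against the fake-backed db handle.
    db.Query("DELETE FROM bar WHERE id = ?", 42)

    // Inspect what was actually sent to the (fake) driver and assert on it.
    q, v := fake.LastQuery()
    t.Logf("last query: %v with args %v", q, v)
    if q != "DELETE FROM bar WHERE id = ?" {
        t.Errorf("unexpected query: %v", q)
    }
}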

How to test concurrency and locking in golang?

We are trying to test locks. Basically, there are multiple clients trying to obtain a lock on a particular key. In the example below, we used the key "x".
I don't know how to test whether the locking is working; right now I can only read the logs to determine that.
The correct sequence of events should be:
client1 obtains lock on key "x"
client2 tries to obtain lock on key "x" (fmt.Println("2 getting lock")) - but is blocked and waits
client1 releases lock on key "x"
client2 obtains lock on key "x"
Q1: How could I automate the process and turn this into a test?
Q2: What are some of the tips to testing concurrency / mutex locking in general?
func TestLockUnlock(t *testing.T) {
    client1, err := NewClient()
    if err != nil {
        t.Error("Unexpected new client error: ", err)
    }

    fmt.Println("1 getting lock")
    id1, err := client1.Lock("x", 10*time.Second)
    if err != nil {
        t.Error("Unexpected lock error: ", err)
    }
    fmt.Println("1 got lock")

    go func() {
        client2, err := NewClient()
        if err != nil {
            t.Error("Unexpected new client error: ", err)
        }

        fmt.Println("2 getting lock")
        id2, err := client2.Lock("x", 10*time.Second)
        if err != nil {
            t.Error("Unexpected lock error: ", err)
        }
        fmt.Println("2 got lock")

        fmt.Println("2 releasing lock")
        err = client2.Unlock("x", id2)
        if err != nil {
            t.Error("Unexpected Unlock error: ", err)
        }
        fmt.Println("2 released lock")

        err = client2.Close()
        if err != nil {
            t.Error("Unexpected connection close error: ", err)
        }
    }()

    fmt.Println("sleeping")
    time.Sleep(2 * time.Second)
    fmt.Println("finished sleeping")

    fmt.Println("1 releasing lock")
    err = client1.Unlock("x", id1)
    if err != nil {
        t.Error("Unexpected Unlock error: ", err)
    }
    fmt.Println("1 released lock")

    err = client1.Close()
    if err != nil {
        t.Error("Unexpected connection close error: ", err)
    }

    time.Sleep(5 * time.Second)
}

func NewClient() (*Client, error) {
    ....
}

func (c *Client) Lock(lockKey string, timeout time.Duration) (lockId int64, err error) {
    ....
}

func (c *Client) Unlock(lockKey string, lockId int64) (err error) {
    ....
}
Concurrency testing of lock-based code is hard, to the extent that provably-correct solutions are difficult to come by. Ad-hoc manual testing via print statements is not ideal.
There are four dynamic concurrency problems that are essentially untestable. Along with the testing of performance, a statistical approach is the best you can achieve via test code (e.g. establishing that the 90th-percentile performance is better than 10ms, or that deadlock is less than 1% likely).
This is one of the reasons that the Communicating Sequential Processes (CSP) approach provided by Go is better to use than locks on shared memory. Consider that your goroutine under test provides a unit with specified behaviour. It can be tested against other goroutines that provide the necessary test inputs via channels and monitor the result outputs via channels.
With CSP, using goroutines without any shared memory (and without any inadvertently shared memory via pointers) guarantees that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and Willcock) can establish that there won't be deadlock between goroutines. It then remains to establish that the functional behaviour is correct, for which the goroutine test harness mentioned above will do nicely.
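
As a concrete sketch of that test-harness idea applied to the question's API (NewClient, Lock and Unlock are the asker's functions; the events channel and the short sleep that gives client2 time to block inside Lock are additions, and that sleep is exactly the kind of heuristic the caveats above refer to), the order of events can be recorded on a channel and asserted instead of being read from the logs:

func TestLockBlocksSecondClient(t *testing.T) {
    events := make(chan string, 4) // records what happened, in order

    client1, err := NewClient()
    if err != nil {
        t.Fatal("unexpected new client error: ", err)
    }
    id1, err := client1.Lock("x", 10*time.Second)
    if err != nil {
        t.Fatal("unexpected lock error: ", err)
    }
    events <- "1 got lock"

    go func() {
        client2, err := NewClient()
        if err != nil {
            t.Error("unexpected new client error: ", err)
            return
        }
        id2, err := client2.Lock("x", 10*time.Second) // should block until client1 unlocks
        if err != nil {
            t.Error("unexpected lock error: ", err)
            return
        }
        events <- "2 got lock"
        client2.Unlock("x", id2)
    }()

    // Give client2 a moment to reach its Lock call, then release the lock.
    time.Sleep(100 * time.Millisecond)
    events <- "1 releasing lock"
    if err := client1.Unlock("x", id1); err != nil {
        t.Fatal("unexpected unlock error: ", err)
    }

    want := []string{"1 got lock", "1 releasing lock", "2 got lock"}
    for _, expected := range want {
        select {
        case got := <-events:
            if got != expected {
                t.Fatalf("expected event %q, got %q", expected, got)
            }
        case <-time.After(5 * time.Second):
            t.Fatalf("timed out waiting for event %q", expected)
        }
    }
}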