I have some SQL queries that do not change between requests (only their parameters do). So, instead of doing this for each request:
func HandleRequest() {
    rows, err := db.Query(sqlQuery, params...)
    // do something with data
}
Is it okay if for each request I do this instead:
// together with server initialization
stmt, err := db.Prepare(sqlQuery)

func HandleRequest() {
    rows, err := stmt.Query(params...)
    // do something with data
}
As the documentation of DB.Prepare() states:
Multiple queries or executions may be run concurrently from the returned statement.
It is safe for concurrent use, although the intended use for prepared statements is not to share them between multiple requests. The main reason is that a prepared statement may allocate resources in the DB server itself, and they are not freed until you call the Close() method of the returned statement. So I'd advise against it.
The typical use case is if you have to run the same statement multiple times with different parameters, such as the example in the documentation:
projects := []struct {
    mascot  string
    release int
}{
    {"tux", 1991},
    {"duke", 1996},
    {"gopher", 2009},
    {"moby dock", 2013},
}

stmt, err := db.Prepare("INSERT INTO projects(id, mascot, release, category) VALUES( ?, ?, ?, ? )")
if err != nil {
    log.Fatal(err)
}
defer stmt.Close() // Prepared statements take up server resources and should be closed after use.

for id, project := range projects {
    if _, err := stmt.Exec(id+1, project.mascot, project.release, "open source"); err != nil {
        log.Fatal(err)
    }
}
I am trying to insert data after connecting. When I comment out the INSERT logic I am able to connect to the database, but when I uncomment it, I get this error:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x40f8e2a]
Here is my function:
func Connect() (*sql.DB, error) {
    db, err := sql.Open("postgres", os.Getenv("PG_URL"))
    if err != nil {
        return nil, err
    }
    defer db.Close()

    stmt, _ := db.Prepare("INSERT INTO users(name, email, password) VALUES(?,?,?)")
    res, err := stmt.Exec("test", "test#mail.com", "12344")
    if err != nil {
        panic(err.Error())
    }
    fmt.Println(res)
    fmt.Println("Successfully connected!")
    return db, nil
}
I have also tried to do the same thing as in this article (go sql) and got the same issue.
Am I implementing this wrong?
I bet a dollar/euro/franc that the nil pointer dereference is on the line executing the prepared statement, and that if you check the one error you ignored, it won't be nil and it will tell you what's wrong.
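For illustration, a rough sketch of that part of Connect with the error actually checked. This assumes the lib/pq driver (the question opens "postgres" with PG_URL), whose placeholder syntax is $1, $2, ... rather than ?; the table and values are taken from the question:

    stmt, err := db.Prepare("INSERT INTO users(name, email, password) VALUES($1, $2, $3)")
    if err != nil {
        // With "?" placeholders Postgres typically rejects the statement here,
        // Prepare returns a nil *Stmt, and the later stmt.Exec is exactly the
        // nil pointer panic from the question.
        return nil, err
    }
    defer stmt.Close()

    if _, err := stmt.Exec("test", "test#mail.com", "12344"); err != nil {
        return nil, err
    }
    // Note: do not `defer db.Close()` inside Connect if the caller is meant
    // to keep using the returned *sql.DB.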
I had the same problem with sqlite.
As Ivaylo Novakov described in his answer, I had to log the err of the Prepare statement (which, like you, I had ignored with stmt, _).
For me it ran fine while I was developing, but when I created my final binary I had forgotten to enable cgo.
The err contained the hint:
Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub
Background
I am using the github.com/jmoiron/sqlx golang package with a Postgres database.
I have the following wrapper function to run SQL code in a transaction:
func (s *postgresStore) runInTransaction(ctx context.Context, fn func(*sqlx.Tx) error) error {
    tx, err := s.db.Beginx()
    if err != nil {
        return err
    }
    defer func() {
        if err != nil {
            tx.Rollback()
            return
        }
        err = tx.Commit()
    }()
    err = fn(tx)
    return err
}
Given this, consider the following code:
func (s *store) SampleFunc(ctx context.Context) error {
    err := s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point A: Do some database work
        if err := tx.Commit(); err != nil {
            return err
        }
        // Point B: Do some more database work, which may return an error
    })
}
Desired behavior
If there is an error at Point A, then the transaction should have done zero work
If there is an error at Point B, then the transaction should still have completed the work at Point A.
Problem with current code
The code does not work as intended at the moment, because I am committing the transaction twice (once in runInTransaction, once in SampleFunc).
A Possible Solution
Where I commit the transaction, I could instead run something like tx.Exec("SAVEPOINT my_savepoint"), then defer tx.Exec("ROLLBACK TO SAVEPOINT my_savepoint")
After the code at Point B, I could run: tx.Exec("RELEASE SAVEPOINT my_savepoint")
So, if the code at Point B runs without error, the deferred ROLLBACK TO SAVEPOINT will simply fail to find the savepoint, because it has already been released.
Problems with Possible Solution
I'm not sure if using savepoints will mess with the database/sql package's behavior. Also, my solution seems a bit messy -- surely there is a cleaner way to do this!
Multiple transactions
You can split your work into two transactions:
func (s *store) SampleFunc(ctx context.Context) error {
    err := s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point A: Do some database work
        return nil
    })
    if err != nil {
        return err
    }
    return s.runInTransaction(ctx, func(tx *sqlx.Tx) error {
        // Point B: Do some more database work, which may return an error
        return nil
    })
}
I had a similar problem: I had a lot of steps in one transaction.
After starting the transaction:
BEGIN
In a loop:
SAVEPOINT s1
Some actions ....
If I get an error: ROLLBACK TO SAVEPOINT s1
If OK, go to the next step
Finally COMMIT
This approach gives me the ability to perform all the steps one by one. If some steps fail, I can throw away only those, keeping the others, and finally commit all the "good" work.
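For illustration, a minimal sketch of that step-by-step savepoint pattern with database/sql against Postgres. The connection string, the steps table, and the per-step data are made up, and the lib/pq driver is assumed:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "os"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", os.Getenv("PG_URL"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        tx, err := db.Begin() // BEGIN
        if err != nil {
            log.Fatal(err)
        }

        steps := []string{"alpha", "beta", "gamma"} // hypothetical per-step data
        for _, step := range steps {
            if _, err := tx.Exec("SAVEPOINT s1"); err != nil {
                tx.Rollback()
                log.Fatal(err)
            }
            // Some actions for this step (the steps table is made up).
            if _, err := tx.Exec("INSERT INTO steps(name) VALUES($1)", step); err != nil {
                // Throw away only this step, keep the previous ones.
                if _, rbErr := tx.Exec("ROLLBACK TO SAVEPOINT s1"); rbErr != nil {
                    tx.Rollback()
                    log.Fatal(rbErr)
                }
                fmt.Println("skipped step:", step, "error:", err)
                continue
            }
        }

        if err := tx.Commit(); err != nil { // COMMIT all the "good" steps
            log.Fatal(err)
        }
    }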
I am really new to Go and I have a question regarding testing.
I had a test where I wanted to check whether persisting a customer in Elasticsearch works or not. I've reduced the code to the critical part and posted it on GitHub: (https://github.com/fvosberg/elastic-go-testing)
The problem is that I have to wait for Elasticsearch to index the new document before I can search for it. Is there another option besides waiting a second for this to happen? This feels very ugly, but I don't know how else I can test the integration (working with Elasticsearch, lowercasing the email address, ...).
Are there solutions for this problem?
package main

import (
    "github.com/fvosberg/elastic-go-testing/customer"
    "testing"
    "time"
)

func TestRegistration(t *testing.T) {
    testCustomer := customer.Customer{Email: "testing#test.de"}
    testCustomer.Create()

    time.Sleep(time.Second * 1)

    _, err := customer.FindByEmail("testing#test.de")
    if err != nil {
        t.Logf("Error occurred: %+v\n", err)
        t.Fail()
    } else {
        t.Log("Found customer testing#test.de")
    }
}
Elasticsearch has a flush command that is useful for this situation. Since you're using the elastic project as an interface, you can use the following (where client is your ES client):
...
testCustomer.Create()

_, err := client.Flush().Do()
if err != nil {
    t.Fatal(err)
}

_, err = customer.FindByEmail("testing#test.de")
...
We are trying to test locks. Basically, there are multiple clients trying to obtain a lock on a particular key. In the example below, we used the key "x".
I don't know how to test whether the locking is working. I can only read the logs to determine whether it is working.
The correct sequence of events should be:
client1 obtains lock on key "x"
client2 tries to obtain lock on key "x" (fmt.Println("2 getting lock")) - but is blocked and waits
client1 releases lock on key "x"
client2 obtains lock on key "x"
Q1: How could I automate the process and turn this into a test?
Q2: What are some of the tips to testing concurrency / mutex locking in general?
func TestLockUnlock(t *testing.T) {
    client1, err := NewClient()
    if err != nil {
        t.Error("Unexpected new client error: ", err)
    }

    fmt.Println("1 getting lock")
    id1, err := client1.Lock("x", 10*time.Second)
    if err != nil {
        t.Error("Unexpected lock error: ", err)
    }
    fmt.Println("1 got lock")

    go func() {
        client2, err := NewClient()
        if err != nil {
            t.Error("Unexpected new client error: ", err)
        }
        fmt.Println("2 getting lock")
        id2, err := client2.Lock("x", 10*time.Second)
        if err != nil {
            t.Error("Unexpected lock error: ", err)
        }
        fmt.Println("2 got lock")

        fmt.Println("2 releasing lock")
        err = client2.Unlock("x", id2)
        if err != nil {
            t.Error("Unexpected Unlock error: ", err)
        }
        fmt.Println("2 released lock")

        err = client2.Close()
        if err != nil {
            t.Error("Unexpected connection close error: ", err)
        }
    }()

    fmt.Println("sleeping")
    time.Sleep(2 * time.Second)
    fmt.Println("finished sleeping")

    fmt.Println("1 releasing lock")
    err = client1.Unlock("x", id1)
    if err != nil {
        t.Error("Unexpected Unlock error: ", err)
    }
    fmt.Println("1 released lock")

    err = client1.Close()
    if err != nil {
        t.Error("Unexpected connection close error: ", err)
    }

    time.Sleep(5 * time.Second)
}
func NewClient() (*Client, error) {
    ....
}

func (c *Client) Lock(lockKey string, timeout time.Duration) (lockId int64, err error) {
    ....
}

func (c *Client) Unlock(lockKey string, lockId int64) error {
    ....
}
Concurrency testing of lock-based code is hard, to the extent that provably correct solutions are difficult to come by. Ad-hoc manual testing via print statements is not ideal.
There are four dynamic concurrency problems that are essentially untestable. As with performance testing, a statistical approach is the best you can achieve via test code (e.g. establishing that the 90th-percentile latency is under 10 ms or that deadlock occurs in less than 1% of runs).
This is one of the reasons that the Communicating Sequential Processes (CSP) approach provided by Go is a better choice than locks on shared memory. Consider that your goroutine under test provides a unit with specified behaviour. It can then be tested by other goroutines that supply the necessary test inputs via channels and monitor the resulting outputs via channels.
With CSP, using goroutines without any shared memory (and without any inadvertently shared memory via pointers) guarantees that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and Willcock) can establish that there won't be deadlock between goroutines. It then remains to establish that the functional behaviour is correct, for which the goroutine test harness mentioned above will do nicely.
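As a rough illustration of automating the ordering check (Q1), here is a sketch in which each goroutine reports what it did on a channel and the test asserts the order of events instead of reading logs. It assumes the NewClient/Lock/Unlock API from the question; the events channel, the sleep duration and the test name are invented for the example:

    // Sketch only: needs "reflect", "testing" and "time" imports.
    func TestLockOrdering(t *testing.T) {
        events := make(chan string, 3) // each goroutine reports what it did

        client1, err := NewClient()
        if err != nil {
            t.Fatal(err)
        }
        id1, err := client1.Lock("x", 10*time.Second)
        if err != nil {
            t.Fatal(err)
        }
        events <- "1 got lock"

        done := make(chan struct{})
        go func() {
            defer close(done)
            client2, err := NewClient()
            if err != nil {
                t.Error(err)
                return
            }
            id2, err := client2.Lock("x", 10*time.Second) // should block until client1 unlocks
            if err != nil {
                t.Error(err)
                return
            }
            events <- "2 got lock"
            if err := client2.Unlock("x", id2); err != nil {
                t.Error(err)
            }
        }()

        // Give client2 a moment to block inside Lock; still timing-based, but
        // the assertion below fails loudly if the blocking behaviour is broken.
        time.Sleep(100 * time.Millisecond)
        events <- "1 releasing lock"
        if err := client1.Unlock("x", id1); err != nil {
            t.Fatal(err)
        }

        <-done
        close(events)

        var got []string
        for e := range events {
            got = append(got, e)
        }
        want := []string{"1 got lock", "1 releasing lock", "2 got lock"}
        if !reflect.DeepEqual(got, want) {
            t.Fatalf("wrong lock ordering: got %v, want %v", got, want)
        }
    }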
From the specs of Prepare() I thought I could use a SQL query with Prepare() like this:
st, err := db.Prepare("SELECT name FROM pet WHERE name=?", "Fluffy")
But I get this error:
# command-line-arguments
.\dbtest2.go:25: too many arguments in call to db.Prepare
This is the only example I could find using Prepare(), but it does not use queries with parameters. How do I use Prepare()?
Look further down the example script that you linked to, and you find this...
st, err := db.Prepare("INSERT INTO document (title) VALUES (?)")
if err != nil {
    fmt.Print(err)
    os.Exit(1)
}
st.Exec("Hello Again")
st.Close()
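The same pattern applies to the SELECT from your question: Prepare takes only the SQL text, and the parameter value is passed later to Query (or QueryRow). A rough sketch, assuming db is an open *sql.DB and a pet table like the one in your query exists:

    // Prepare takes only the SQL text; parameter values are supplied later.
    st, err := db.Prepare("SELECT name FROM pet WHERE name = ?")
    if err != nil {
        log.Fatal(err)
    }
    defer st.Close()

    // The value for the ? placeholder goes to Query, not to Prepare.
    rows, err := st.Query("Fluffy")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    for rows.Next() {
        var name string
        if err := rows.Scan(&name); err != nil {
            log.Fatal(err)
        }
        fmt.Println(name)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }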