database/sql Tx - detecting Commit or Rollback

Using the database/sql and driver packages and Tx, it appears to be impossible to detect whether a transaction has been committed or rolled back without attempting another operation, receiving an error as a result, and then examining the error to determine its type. I would like to be able to determine from the Tx object whether the transaction was committed or not. Sure, I can define and set another variable in each function that uses a Tx, but I have quite a number of them, and each one costs two lines (a variable and an assignment). I also have a deferred function to do a Rollback if needed, and it needs to be passed the bool variable.
Would it be acceptable to set the Tx variable to nil after a Commit or Rollback, and will the GC recover any memory, or is that a no-no, or is there a better alternative?

You want to make sure that Begin(), Commit(), and Rollback() all appear within the same function. This makes transactions easier to track, and lets you ensure they are closed properly by using a defer.
Here is an example of this, which does a Commit or Rollback depending on whether an error is returned:
func (s Service) DoSomething() (err error) {
    tx, err := s.db.Begin()
    if err != nil {
        return
    }

    defer func() {
        if err != nil {
            tx.Rollback()
            return
        }
        err = tx.Commit()
    }()

    if _, err = tx.Exec(...); err != nil {
        return
    }
    if _, err = tx.Exec(...); err != nil {
        return
    }
    // ...
    return
}
This can get a bit repetitive. Another way of doing this is by wrapping your transactions using a transaction handler:
func Transact(db *sql.DB, txFunc func(*sql.Tx) error) (err error) {
    tx, err := db.Begin()
    if err != nil {
        return
    }

    defer func() {
        if p := recover(); p != nil {
            tx.Rollback()
            panic(p) // re-throw panic after Rollback
        } else if err != nil {
            tx.Rollback() // err is non-nil; don't change it
        } else {
            err = tx.Commit() // err is nil; if Commit returns an error, update err
        }
    }()

    err = txFunc(tx)
    return err
}
Using the transaction handler above, I can do this:
func (s Service) DoSomething() error {
    return Transact(s.db, func(tx *sql.Tx) error {
        if _, err := tx.Exec(...); err != nil {
            return err
        }
        if _, err := tx.Exec(...); err != nil {
            return err
        }
        return nil
    })
}
This keeps my transactions succinct and ensures they are properly handled.
In my transaction handler I use recover() to catch panics so that a Rollback happens right away. I re-throw the panic to allow my code to catch it if a panic is expected. Under normal circumstances a panic should not occur; errors should be returned instead.
If we did not handle panics, the transaction would still be rolled back eventually: a non-committed transaction gets rolled back by the database when the client disconnects or when the transaction gets garbage collected. However, waiting for the transaction to resolve on its own could cause other (undefined) issues, so it's better to resolve it as quickly as possible.
One thing that may not be immediately clear is that a deferred closure can change a function's return value when the return value is named and captured by the closure. In the transaction handler the transaction is committed when err (the named return value) is nil. The call to Commit can itself return an error, so we assign its result to err with err = tx.Commit(). We do not do the same with Rollback, because there err is already non-nil and we do not want to overwrite the existing error.
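As a minimal, self-contained sketch of this behavior (not part of the handler above), here is a deferred closure modifying a named return value after the return statement has assigned it:
package main

import "fmt"

// double returns n doubled: "return n" first sets the named return
// value result to n, then the deferred closure runs and doubles it.
func double(n int) (result int) {
    defer func() {
        result *= 2
    }()
    return n
}

func main() {
    fmt.Println(double(3)) // prints 6
}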

Related

How to intercept `rollback` in gorm?

I need to execute some logic after create operations fail.
Callbacks seem like they could work, but there is a case where, if the operation is part of a transaction, it may not actually be executed, and I need to handle that after the rollback. So the question is: how do I intercept the rollback?
You can use a manual transaction in a function like this:
func CreateAnimals(db *gorm.DB) error {
    // Note the use of tx as the database handle once you are within a transaction
    tx := db.Begin()
    defer func() {
        if r := recover(); r != nil {
            tx.Rollback()
        }
    }()

    if err := tx.Error; err != nil {
        return err
    }

    if err := tx.Create(&Animal{Name: "Giraffe"}).Error; err != nil {
        tx.Rollback()
        return err
    }

    if err := tx.Create(&Animal{Name: "Lion"}).Error; err != nil {
        tx.Rollback()
        return err
    }

    return tx.Commit().Error
}
If CreateAnimals returns an error, you know the transaction was rolled back and can do your desired job there.
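For example, a caller might look like this (handleRollback here is a hypothetical stand-in for whatever post-rollback work you need):
if err := CreateAnimals(db); err != nil {
    // The transaction was rolled back inside CreateAnimals,
    // so the post-rollback handling can run here.
    handleRollback(err) // hypothetical post-rollback handler
}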

SQL Next not advancing cursor

I have a function that I used to iterate over a result set from a query:
func readRows(rows *sql.Rows, translator func(*sql.Rows) error) error {
    defer rows.Close()

    // Iterate over each row in the rows and scan each; if an error occurs then return
    for shouldScan := rows.Next(); shouldScan; {
        if err := translator(rows); err != nil {
            return err
        }
    }

    // Check if the rows had an error; if they did then return them. Otherwise,
    // close the rows and return an error if the close function fails
    if err := rows.Err(); err != nil {
        return err
    }
    return nil
}
The translator function is primarily responsible for calling Scan on the *sql.Rows object. An example of this is:
readRows(rows, func(scanner *sql.Rows) error {
    var entry gopb.TestObject

    // Embed the variables into a list that we can use to pull information out of the rows
    scanned := []interface{}{...}
    if err := scanner.Scan(scanned...); err != nil {
        return err
    }

    entries = append(entries, &entry)
    return nil
})
I wrote a unit test for this code:
// Create the SQL mock and the RDS requester
db, mock, _ := sqlmock.New()
requester := Requester{conn: db}
defer db.Close()

// Create the rows we'll use for testing the query
rows := sqlmock.NewRows([]string{"id", "data"}).
    AddRow(0, "data")

// Verify the command order for the transaction
mock.ExpectBegin()
mock.ExpectQuery(regexp.QuoteMeta("SELECT `id`, `data`, FROM `data`")).WillReturnRows(rows)
mock.ExpectRollback()

// Attempt to get the data
data, err := requester.GetData(context.TODO())
However, it appears that Next is being called infinitely. I'm not sure if this is an sqlmock issue or an issue with my code. Any help would be appreciated.
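One thing worth checking (an observation about readRows as shown, not about sqlmock): the for statement has no post clause, so rows.Next() is evaluated only once, in the init clause, and the loop body never advances the cursor. The conventional idiom calls Next on every pass, for example:
for rows.Next() {
    if err := translator(rows); err != nil {
        return err
    }
}
return rows.Err()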

Connection leak in lib/pq postgres driver

Given that db is of type *sql.DB (using the lib/pq driver), the following code causes a connection leak:
rows, err := db.Query(
    "select 1 from things where id = $1",
    thing,
)
if err != nil {
    return nil, fmt.Errorf("can't select thing (%d): %w", thing, err)
}

found := false
for rows.Next() {
    found = true
    break
}
Calling this code repeatedly increases the number of open connections, until exhausted:
select sum(numbackends) from pg_stat_database;
// 5
// 6
// 7
// ...
// 80
How do I fix it?
There are a couple of problems with your code as written. The direct answer to your question of avoiding connection leaks is to close the rows iterator, as mentioned in the documentation. The normal way to call it is in a defer statement:
rows, err := db.Query(
    "select 1 from things where id = $1",
    thing,
)
if err != nil {
    return nil, fmt.Errorf("can't select thing (%d): %w", thing, err)
}
defer rows.Close()

found := false
for rows.Next() {
    found = true
    break
}
Second, since all you care about is a single result, there's no reason to fetch a multi-row result set at all, which also implicitly solves the connection leak. See this post for a discussion of the quickest way to check for existence in Postgres. Adapting that here:
row := db.QueryRow(
    "select EXISTS(SELECT 1 from things where id = $1)",
    thing,
)

var found bool
if err := row.Scan(&found); err != nil {
    return nil, fmt.Errorf("failed to scan result: %w", err)
}
Note that QueryRow returns only a *sql.Row, not an error; any error from the query is deferred until Scan is called.

How to determine name of database driver I'm using?

In code that tries to be database agnostic, I would like to perform some database-specific queries, so I need to know the name of the database driver in Go:
db, err := sql.Open(dbstr, dbconnstr)
if err != nil {
    log.Fatal(err)
}

errp := db.Ping()
if errp != nil {
    log.Fatal(errp)
}

log.Printf("%s\n", db.Driver())
How can I determine the name of the database driver I'm using?
Give your database string in URL format, like postgres://postgres@localhost:5432/db_name?sslmode=disable.
Then determine the database type using the Parse function of the net/url package, and run the database-specific queries based on it.
func New(url string) (Driver, error) {
    u, err := neturl.Parse(url)
    if err != nil {
        return nil, err
    }

    switch u.Scheme {
    case "postgres":
        d := &postgres.Driver{}
        if err := d.Initialize(url); err != nil {
            return nil, err
        }
        return d, nil
    case "mysql":
        d := &mysql.Driver{}
        if err := d.Initialize(url); err != nil {
            return nil, err
        }
        return d, nil
    case "bash":
        d := &bash.Driver{}
        if err := d.Initialize(url); err != nil {
            return nil, err
        }
        return d, nil
    case "cassandra":
        d := &cassandra.Driver{}
        if err := d.Initialize(url); err != nil {
            return nil, err
        }
        return d, nil
    case "sqlite3":
        d := &sqlite3.Driver{}
        if err := d.Initialize(url); err != nil {
            return nil, err
        }
        return d, nil
    default:
        return nil, fmt.Errorf("driver '%s' not found", u.Scheme)
    }
}
You should already know the name of the database driver because it's represented by the parameter you identified with the dbstr variable.
db, err := sql.Open("postgres", "user= ... ")
if err != nil {
    log.Fatal(err)
}
db.Driver() correctly returns the underlying driver in use, but you are formatting it as a string (because of %s). If you replace %s with %T you will see that it correctly prints the type:
log.Printf("%T\n", db.Driver())
For example, if you use github.com/lib/pq, the output is *pq.drv. This is the same as using the reflect package:
log.Printf("%s\n", reflect.TypeOf(db.Driver()))
It may be impractical to use that value for performing conditional execution. Moreover, the Driver interface doesn't specify any way to get specific driver information, other than the Open() function.
If you have specific needs, you may want to either use the driver name passed when you open the connection, or create specific drivers that delegate to the original ones and handle your custom logic.
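As a minimal sketch of the first option (the DB wrapper type and Open helper below are assumptions for illustration, not part of database/sql), you can remember the driver name you passed to sql.Open alongside the handle:
type DB struct {
    *sql.DB
    DriverName string // remembered from sql.Open
}

func Open(driverName, dsn string) (*DB, error) {
    db, err := sql.Open(driverName, dsn)
    if err != nil {
        return nil, err
    }
    return &DB{DB: db, DriverName: driverName}, nil
}
Code that needs database-specific behavior can then switch on db.DriverName instead of inspecting the driver type.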

Go ioutil using too many file descriptors/leak?

I am going through a list of files and unmarshalling the XML data in them into an array of structs, rArray. I intend to process about 18000 files. When I get to about 1300 files processed, the program panics, saying that too many files are open. If I limit the number of files processed to a safe 1000, the program does not crash.
As seen below, I am using ioutil.ReadFile to read the file data.
for _, f := range files {
    func() {
        data, err := ioutil.ReadFile("./" + recordDir + "/" + f.Name())
        if err != nil {
            fmt.Printf("error reading %v\n", err)
            return
        }
        if strings.Contains(filepath.Ext(f.Name()), "xml") {
            // unmarshal data and put into struct array
            err = xml.Unmarshal(data, &rArray[a])
            if err != nil {
                fmt.Printf("error decoding %v: %v\n", f.Name(), err)
                return
            }
        }
    }()
}
I am not sure if Go is using too many file descriptors or not closing the files fast enough.
After reading https://groups.google.com/forum/#!topic/golang-nuts/7yXXjgcOikM and viewing the ioutil source at http://golang.org/src/pkg/io/ioutil/ioutil.go, the code for ioutil.ReadFile shows that it uses defer to close the file. defer runs when the calling function returns, and ReadFile() is the calling function. Am I correct in this understanding?
I also tried wrapping the ioutil.ReadFile part of my code in a function, but it makes no difference.
My ulimit is set to unlimited.
UPDATE:
I believe the "too many open files" error is actually occurring in my Unzip function.
func Unzip(src, dest string) error {
    r, err := zip.OpenReader(src)
    if err != nil {
        return err
    }

    for _, f := range r.File {
        rc, err := f.Open()
        if err != nil {
            panic(err)
        }

        path := filepath.Join(dest, f.Name)
        if f.FileInfo().IsDir() {
            os.MkdirAll(path, f.Mode())
        } else {
            f, err := os.OpenFile(
                path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
            if err != nil {
                panic(err)
            }

            _, err = io.Copy(f, rc)
            if err != nil {
                panic(err)
            }

            f.Close()
        }

        rc.Close()
    }

    r.Close()
    return nil
}
I initially got the Unzip function from https://gist.github.com/hnaohiro/4572580, but upon further inspection, the use of defer in the gist author's function seemed wrong: the files would only be closed after the Unzip() function returned, which is too late, because then 18000 file descriptors would be open. ;)
I replaced the deferred Closes with explicit calls to Close() as shown above, but I am still getting the same "too many open files" error. Is there a problem with my modified Unzip function?
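To illustrate the pitfall with a standalone sketch (leaky is a hypothetical function, not from the gist): defer statements inside a loop do not run at the end of each iteration; they all run when the surrounding function returns, so every file stays open until then.
func leaky(paths []string) error {
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return err
        }
        defer f.Close() // runs at function exit, not at loop-iteration exit
        // ... read from f ...
    }
    return nil
}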
UPDATE #2:
Oops, I was running this on Heroku and was pushing to the wrong app with my changes this entire time. Lesson learned: verify target app in heroku toolbelt.
The Unzip code from https://gist.github.com/hnaohiro/4572580 does not work, as it does not close any files until all files have been processed.
My unzip code with explicit Close calls above works, and so does the defer version in #peterSO's answer.
I would modify the Unzip function from https://gist.github.com/hnaohiro/4572580 to the following:
package main

import (
    "archive/zip"
    "io"
    "log"
    "os"
    "path/filepath"
)

func unzipFile(f *zip.File, dest string) error {
    rc, err := f.Open()
    if err != nil {
        return err
    }
    defer rc.Close()

    path := filepath.Join(dest, f.Name)
    if f.FileInfo().IsDir() {
        err := os.MkdirAll(path, f.Mode())
        if err != nil {
            return err
        }
    } else {
        f, err := os.OpenFile(
            path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
        if err != nil {
            return err
        }
        defer f.Close()

        _, err = io.Copy(f, rc)
        if err != nil {
            return err
        }
    }
    return nil
}

func Unzip(src, dest string) error {
    r, err := zip.OpenReader(src)
    if err != nil {
        return err
    }
    defer r.Close()

    for _, f := range r.File {
        err := unzipFile(f, dest)
        if err != nil {
            return err
        }
    }
    return nil
}

func main() {
    err := Unzip("./sample.zip", "./out")
    if err != nil {
        log.Fatal(err)
    }
}