SQL Next not advancing cursor

I have a function that I used to iterate over a result set from a query:
func readRows(rows *sql.Rows, translator func(*sql.Rows) error) error {
    defer rows.Close()
    // Iterate over each row in the rows and scan each; if an error occurs then return
    for shouldScan := rows.Next(); shouldScan; {
        if err := translator(rows); err != nil {
            return err
        }
    }
    // Check if the rows had an error; if they did then return them. Otherwise,
    // close the rows and return an error if the close function fails
    if err := rows.Err(); err != nil {
        return err
    }
    return nil
}
The translator function is primarily responsible for calling Scan on the *sql.Rows object. An example of this is:
readRows(rows, func(scanner *sql.Rows) error {
    var entry gopb.TestObject
    // Embed the variables into a list that we can use to pull information out of the rows
    scanned := []interface{}{...}
    if err := scanner.Scan(scanned...); err != nil {
        return err
    }
    entries = append(entries, &entry)
    return nil
})
I wrote a unit test for this code:
// Create the SQL mock and the RDS requester
db, mock, _ := sqlmock.New()
requester := Requester{conn: db}
defer db.Close()
// Create the rows we'll use for testing the query
rows := sqlmock.NewRows([]string{"id", "data"}).
    AddRow(0, "data")
// Verify the command order for the transaction
mock.ExpectBegin()
mock.ExpectQuery(regexp.QuoteMeta("SELECT `id`, `data`, FROM `data`")).WillReturnRows(rows)
mock.ExpectRollback()
// Attempt to get the data
data, err := requester.GetData(context.TODO())
However, it appears that Next is being called infinitely. I'm not sure if this is an sqlmock issue or an issue with my code. Any help would be appreciated.
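The culprit is the loop header in readRows: rows.Next() is evaluated only once, in the for statement's init clause, so the cursor never advances past the first row and translator is called forever. A minimal sketch of the same loop with rows.Next() as the loop condition, so the cursor advances on every iteration:

// Call rows.Next() as the loop condition so the cursor advances
// before each scan and the loop ends when the rows are exhausted.
for rows.Next() {
    if err := translator(rows); err != nil {
        return err
    }
}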

Related

Can I return rows result from db without scan it first in Golang func?

I have a mini project in Golang. My plan is to make a base function that will be called from a Model to execute an SQL query and return the rows result without scanning it first. I'm using this approach to avoid forgetting to write defer rows.Close() and to keep the query-executing code in the models simpler. I tried this, but when I try to print the result I get nil without any error (see my screenshot). The result exists when the query is executed and the rows are scanned in the same function. Maybe I missed something? This is my first question, sorry it's too long. Thank you
The base model where the SQL query will be executed and return the result
package model

import "database/sql"
import "hb-backend-v1/config/database"
import "fmt"

func Query(query string) (*sql.Rows, error) {
    connect, err := database.Connect()
    if err != nil {
        fmt.Println("Connection Failed")
        return nil, err
    }
    fmt.Println("Connection Success")
    defer connect.Close()
    rows, err := connect.Query(query)
    defer rows.Close()
    if err != nil {
        return nil, err
    }
    return rows, nil
}
This is where the base model will be called and give the result
package product

import "database/sql"
import _ "fmt"
import "hb-backend-v1/model"

type Hasil struct {
    Id_alamat_store int
    Id_tk           int
    Alamat          string
    Id_wil          int
    Latitude        sql.NullString
    Longitude       sql.NullString
}

func ProductList() ([]Hasil, error) {
    rows, err := model.Query("SELECT * FROM alamat_store")
    if err != nil {
        return nil, err
    }
    var result []Hasil
    for rows.Next() {
        var each = Hasil{}
        var err = rows.Scan(&each.Id_alamat_store, &each.Id_tk, &each.Alamat, &each.Id_wil, &each.Latitude, &each.Longitude)
        if err != nil {
            return nil, err
        }
        result = append(result, each)
    }
    return result, nil
}
Both the connection and the rows will be closed once Query exits; after those two are closed you can't use rows anymore.
One approach to get around that would be to pass a closure to Query and have Query execute it before closing the two resources:
func Query(query string, scan func(*sql.Rows) error) error {
    connect, err := database.Connect()
    if err != nil {
        return err
    }
    defer connect.Close()
    rows, err := connect.Query(query)
    if err != nil {
        return err
    }
    defer rows.Close()
    return scan(rows)
}
func ProductList() ([]Hasil, error) {
    var result []Hasil
    err := model.Query("SELECT * FROM alamat_store", func(rows *sql.Rows) error {
        for rows.Next() {
            var each = Hasil{}
            var err = rows.Scan(&each.Id_alamat_store, &each.Id_tk, &each.Alamat, &each.Id_wil, &each.Latitude, &each.Longitude)
            if err != nil {
                return err
            }
            result = append(result, each)
        }
        return nil
    })
    if err != nil {
        return nil, err
    }
    return result, nil
}
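One refinement worth considering (not part of the original answer): have the helper forward query arguments so callers can use parameterized queries instead of building raw SQL strings. A minimal sketch:

// Sketch: args are forwarded to connect.Query so callers can use
// placeholders ("SELECT ... WHERE id = ?") rather than raw strings.
func Query(query string, scan func(*sql.Rows) error, args ...interface{}) error {
    connect, err := database.Connect()
    if err != nil {
        return err
    }
    defer connect.Close()
    rows, err := connect.Query(query, args...)
    if err != nil {
        return err
    }
    defer rows.Close()
    return scan(rows)
}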

Get sql rows from row

How can I use one parse (Scan) method for both *sql.Rows and *sql.Row in Go?
The parse (Scan) methods use the same code to parse one row:
...
row := r.stmOne.QueryRow(id)
rows, err := r.stmOther.Query(ids, params)
parseRow(row, &item)
for rows.Next() {
    parseRows(rows, &item)
}
...

func parseRows(row *sql.Rows, item *typeItem) error {
    err := row.Scan(....) /// same
}

func parseRow(row *sql.Row, item *typeItem) error {
    err := row.Scan(....) /// same
}
Both *sql.Row and *sql.Rows have a Scan method with the same signature, so you can declare a small interface that both types satisfy and write a single scan function against it:

type RowScanner interface {
    Scan(dest ...interface{}) error
}

func scanRowIntoItem(row RowScanner, item *typeItem) error {
    err := row.Scan(...)
}

row := r.stmOne.QueryRow(id)
rows, err := r.stmOther.Query(ids, params)
scanRowIntoItem(row, &item)
for rows.Next() {
    scanRowIntoItem(rows, &item)
}

How do I create a structure for a dynamic SQL query?

In my Golang application I make SQL requests to the database. Usually, in the SQL query, I specify the columns that I want to get from the table and create a structure based on that. You can see an example of the working code below.
QUESTION:
What should I do if I don't know the number and names of the columns in the table? For example, if I make an SQL request like SELECT * FROM filters; instead of SELECT FILTER_ID, FILTER_NAME FROM filters;, how do I create a structure in this case?
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM filters;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    columns, err := rows.Columns()
    if err != nil {
        fmt.Println(err)
        return
    }
    filters := make([]interface{}, len(columns))
    for i := range columns {
        filters[i] = new(sql.RawBytes)
    }
    for rows.Next() {
        if err = rows.Scan(filters...); err != nil {
            fmt.Println(err)
            return
        }
    }
    utils.Response(responseWriter, http.StatusOK, filters)
}
Well, I finally found the solution. As you can see from the code below, first I make an SQL request where I do not specify the names of the columns. Then I get information about the columns with the ColumnTypes() method, which returns column information such as the column's type, length, and nullability. Next I take the name and type of each column and fill the interface values with that data:
for i, column := range columns {
    object[column.Name()] = reflect.New(column.ScanType()).Interface()
    values[i] = object[column.Name()]
}
The full code which I use looks like this:
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM table_name;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    var objects []map[string]interface{}
    for rows.Next() {
        columns, err := rows.ColumnTypes()
        if err != nil {
            fmt.Println(err)
            return
        }
        values := make([]interface{}, len(columns))
        object := map[string]interface{}{}
        for i, column := range columns {
            object[column.Name()] = reflect.New(column.ScanType()).Interface()
            values[i] = object[column.Name()]
        }
        if err = rows.Scan(values...); err != nil {
            fmt.Println(err)
            return
        }
        objects = append(objects, object)
    }
    utils.Response(responseWriter, http.StatusOK, objects)
}
Use the USER_TAB_COLUMNS table to get the list of columns for the table you are querying and store it in an array or collection. Later, execute the query and Scan the columns that you already know from the previous query.
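A minimal sketch of that approach, assuming an Oracle driver whose placeholder style is :1 (e.g. godror), a hypothetical helper name getColumns, and the database/sql and strings imports; USER_TAB_COLUMNS stores table names uppercased by default, hence the ToUpper:

// getColumns looks up the column names of tableName in Oracle's
// USER_TAB_COLUMNS dictionary view before the real query runs.
func getColumns(db *sql.DB, tableName string) ([]string, error) {
    rows, err := db.Query(
        "SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = :1",
        strings.ToUpper(tableName))
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var columns []string
    for rows.Next() {
        var name string
        if err := rows.Scan(&name); err != nil {
            return nil, err
        }
        columns = append(columns, name)
    }
    return columns, rows.Err()
}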

Bulk insert copy sql table with golang

For context, I'm new to Go and I'm creating a program that can copy tables from Oracle to MySQL.
I use the database/sql Go package, so I assume it can be used for migrating any kind of database.
To simplify my question, I'm copying within the same MySQL database, from table world.city to world.city_copy2.
With my following code, I end up with the same last values in all the rows of the table :-(
Do I somehow need to read through all the values inside the loop? What is the efficient way to do that?
package main

import (
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/go-sql-driver/mysql"
)

const (
    user   = "user"
    pass   = "testPass"
    server = "localhost"
)
func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/world", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        bulkValues = append(bulkValues, scanArgs...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
I have checked the docs of the library, and it seems that the problem here is that bulkValues keeps the addresses of the pointers in scanArgs, so when the buffers behind scanArgs are overwritten by the next Scan, everything previously appended to bulkValues also changes to the latest scanned values.
You need to copy each record out of the values variable, like below:
func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/soverflow", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    // make a slice to hold a copy of each record's values
    record := make([]interface{}, len(columns))
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        for i, col := range values {
            // copy the value out of the reused RawBytes buffer;
            // you need to be careful with the data types here,
            // check out the docs for details
            record[i] = string(col)
        }
        bulkValues = append(bulkValues, record...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
You can also find an example of this in the documentation here.
Note: There might be more efficient ways to copy a database from Oracle to MySQL, but this answer only gives a quick solution for the particular issue you are having.
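As an aside, the comment in the code says the placeHolders string "will be generated according to len of columns", but both listings hardcode five markers. A minimal sketch of actually generating it from the column count (not in the original answer):

// Build "( ?, ?, ..., ? )" with one marker per column so the
// INSERT statement matches whatever table is being copied.
marks := make([]string, len(columns))
for i := range marks {
    marks[i] = "?"
}
placeHolders := "( " + strings.Join(marks, ", ") + " )"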

Should we also close DB's .Prepare() in Golang?

This tutorial shows that rows.Close() must be called, where rows comes from stmt.Query(). Should stmt.Close() also be called, where stmt comes from db.Prepare()?
// inside a function
stmt, err := db.Prepare(cmd) // cmd is SQL string
Check(err)
// should we add: defer stmt.Close()
rows, err := stmt.Query(params) // params is map/interface{}
defer rows.Close()
Check(err)
The short answer is yes, you should call stmt.Close(): a prepared statement holds resources on the database server until it is closed.
The long answer can be found in this google groups thread.
Use it as follows:
// inside a function
stmt, err := db.Prepare(cmd) // cmd is SQL string
if err != nil {
    println(err.Error())
}
defer stmt.Close()
rows, err := stmt.Query(params) // params is map/interface{}
if err != nil {
    println(err.Error())
}
defer rows.Close()
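Worth noting as a design aside (not from the original answer): if a statement is only used once, you can skip Prepare entirely and call db.Query with arguments; database/sql then prepares, executes, and closes the statement under the hood, so only the rows need closing:

// One-off queries don't need an explicit Prepare/Close pair.
rows, err := db.Query(cmd, params)
if err != nil {
    println(err.Error())
}
defer rows.Close()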