Fetch rows Python-style in Go (SQL)

In Python it's as simple as db.query("SELECT id, login, password FROM Users"), which returns a list like [(1, 'root', 'password'), (2, 'toor', 'password')]. I can simply iterate over it:
for user in response: print("id: %s, login: %s, password: %s" % (user[0], user[1], user[2]))
But in Go I can't find a relevant example of a simple way to do this.
I understand that Python is dynamically typed while Go is statically typed. So I'm looking for an answer: maybe some libraries provide such functionality? Hacks? Thanks for any answers!

You need something like this, but note there may be problems if you are using complex MySQL data types such as ENUM and SET:
var (
    result    [][]string
    container []string
    pointers  []interface{}
)
rows, err := db.Query("SELECT id, login, password FROM Users")
if err != nil {
    panic(err.Error())
}
defer rows.Close()
cols, err := rows.Columns()
if err != nil {
    panic(err.Error())
}
length := len(cols)
for rows.Next() {
    // allocate a fresh container for every row, so rows already
    // appended to result are not overwritten by the next Scan
    pointers = make([]interface{}, length)
    container = make([]string, length)
    for i := range pointers {
        pointers[i] = &container[i]
    }
    err = rows.Scan(pointers...)
    if err != nil {
        panic(err.Error())
    }
    result = append(result, container)
}
if err = rows.Err(); err != nil {
    panic(err.Error())
}
Then you can loop over result and print it using the fmt package.
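For example, a minimal print loop over the collected rows (the indexes follow the column order in the SELECT above):

for _, row := range result {
    fmt.Printf("id: %s, login: %s, password: %s\n", row[0], row[1], row[2])
}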

An awesome library that nicely extends the Go standard library's sql interface is sqlx.
Using this library, you can do something similar to your Python example:
db, err := sqlx.Connect("postgres", "user=foo dbname=bar sslmode=disable")
if err != nil {
    log.Fatalln(err)
}
users := []User{}
err = db.Select(&users, "SELECT id,login,password FROM Users")
if err != nil {
    log.Fatalln(err)
}
// iterate through the users slice
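This assumes a User struct whose exported fields match the selected columns; a minimal sketch (the field names are my assumption based on the query):

type User struct {
    Id       int
    Login    string
    Password string
}

By default sqlx matches columns to exported struct fields by lowercasing the field name, so no db tags are needed here.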

How do I create a structure for a dynamic SQL query?

In my Go application I make an SQL request to the database. Usually, in the SQL query, I specify the columns that I want to get from the table and create a structure based on them. You can see an example of the working code below.
QUESTION:
What should I do if I don't know the number and names of the columns in the table? For example, if I make an SQL request like SELECT * FROM filters; instead of SELECT FILTER_ID, FILTER_NAME FROM filters;, how do I create a structure in this case?
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM filters;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    columns, err := rows.Columns()
    if err != nil {
        fmt.Println(err)
        return
    }
    filters := make([]interface{}, len(columns))
    for i := range columns {
        filters[i] = new(sql.RawBytes)
    }
    for rows.Next() {
        if err = rows.Scan(filters...); err != nil {
            fmt.Println(err)
            return
        }
    }
    utils.Response(responseWriter, http.StatusOK, filters)
}
Well, finally I found the solution. As you can see from the code below, first I make the SQL request without specifying the column names. Then I get information about the columns via the ColumnTypes() function, which returns column information such as the column type, length, and nullability. Next, using the name and type of each column, I fill an interface with this data:
for i, column := range columns {
    object[column.Name()] = reflect.New(column.ScanType()).Interface()
    values[i] = object[column.Name()]
}
The full code which I use looks like this:
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM table_name;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    var objects []map[string]interface{}
    for rows.Next() {
        columns, err := rows.ColumnTypes()
        if err != nil {
            fmt.Println(err)
            return
        }
        values := make([]interface{}, len(columns))
        object := map[string]interface{}{}
        for i, column := range columns {
            object[column.Name()] = reflect.New(column.ScanType()).Interface()
            values[i] = object[column.Name()]
        }
        if err = rows.Scan(values...); err != nil {
            fmt.Println(err)
            return
        }
        objects = append(objects, object)
    }
    utils.Response(responseWriter, http.StatusOK, objects)
}
Alternatively, query the USER_TAB_COLUMNS table to get the list of columns of the table you are querying and store it in an array or collection; later, execute the actual query and Scan into the columns you already know from the previous query.
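A minimal sketch of that approach, assuming an Oracle database (USER_TAB_COLUMNS is an Oracle data-dictionary view) and a placeholder table name:

colRows, err := db.Query("SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = 'FILTERS' ORDER BY COLUMN_ID")
if err != nil {
    fmt.Println(err)
    return
}
defer colRows.Close()
var cols []string
for colRows.Next() {
    var name string
    if err := colRows.Scan(&name); err != nil {
        fmt.Println(err)
        return
    }
    cols = append(cols, name)
}
// cols now holds the column names, so you can size the Scan
// destinations with len(cols) before running the real query.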

Bulk insert copy sql table with golang

For context, I'm new to Go and I'm creating a program that can copy tables from Oracle to MySQL.
I use the database/sql Go package, so I assume it can be used for migrating any kind of database.
To simplify my question, I'm copying, on the same MySQL database, the table world.city to world.city_copy2.
With my following code, I end up with the same last values in all the rows of the destination table :-(
Do I somehow need to read through all the values inside the loop? What is the efficient way to do that?
package main

import (
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/go-sql-driver/mysql"
)

const (
    user   = "user"
    pass   = "testPass"
    server = "localhost"
)

func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/world", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        bulkValues = append(bulkValues, scanArgs...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
I have checked the docs of the library, and it seems the problem is that bulkValues keeps the pointers from scanArgs, so when the next Scan overwrites the values those pointers refer to, everything already appended to bulkValues also changes to the latest row's values.
You need to use the values variable to copy the values out, like below:
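To see the aliasing in isolation, here is a tiny standalone sketch (not from the original code):

buf := make([]string, 1)
var collected []interface{}
for _, v := range []string{"first", "second"} {
    buf[0] = v
    // the same pointer is appended twice, so both elements
    // observe whatever buf[0] holds last
    collected = append(collected, &buf[0])
}
fmt.Println(*collected[0].(*string), *collected[1].(*string)) // prints: second second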
func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/soverflow", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    // make an interface slice to keep each record's values
    record := make([]interface{}, len(columns))
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        for i, col := range values {
            // copy the bytes into a new string so the next Scan
            // cannot overwrite it; be careful with the data types here,
            // check out the docs for details
            record[i] = string(col)
        }
        bulkValues = append(bulkValues, record...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
You can also find an example of this in the go-sql-driver documentation.
Note: There might be more efficient ways to copy a database from psql to mysql, but this answer only gives a quick solution for the particular issue you are having.

Should we also close DB's .Prepare() in Golang?

This tutorial shows that rows.Close() must be called, where rows comes from stmt.Query(). Should stmt.Close() also be called, where stmt comes from db.Prepare()?
// inside a function
stmt, err := db.Prepare(cmd) // cmd is SQL string
Check(err)
// should we add: defer stmt.Close()
rows, err := stmt.Query(params) // params is map/interface{}
Check(err)
defer rows.Close()
The short answer is: yes, you should call stmt.Close().
The long answer can be found in this google groups thread.
Use it as follows:
// inside a function
stmt, err := db.Prepare(cmd) // cmd is SQL string
if err != nil {
    println(err.Error())
    return
}
defer stmt.Close()
rows, err := stmt.Query(params) // params is map/interface{}
if err != nil {
    println(err.Error())
    return
}
defer rows.Close()

json.RawMessage from database/sql json column getting overwritten

I'm getting strange behaviour with a struct containing a *json.RawMessage field.
package main

import (
    "database/sql"
    "encoding/json"
    "fmt"

    _ "github.com/lib/pq"
)

type Article struct {
    Id  int
    Doc *json.RawMessage
}

func main() {
    db, err := sql.Open("postgres", "postgres://localhost/json_test?sslmode=disable")
    if err != nil {
        panic(err)
    }
    _, err = db.Query(`create table if not exists articles (id serial primary key, doc json)`)
    if err != nil {
        panic(err)
    }
    _, err = db.Query(`truncate articles`)
    if err != nil {
        panic(err)
    }
    docs := []string{
        `{"type":"event1"}`,
        `{"type":"event2"}`,
    }
    for _, doc := range docs {
        _, err = db.Query(`insert into articles ("doc") values ($1)`, doc)
        if err != nil {
            panic(err)
        }
    }
    rows, err := db.Query(`select id, doc from articles`)
    if err != nil {
        panic(err)
    }
    articles := make([]Article, 0)
    for rows.Next() {
        var a Article
        err := rows.Scan(
            &a.Id,
            &a.Doc,
        )
        if err != nil {
            panic(err)
        }
        articles = append(articles, a)
        fmt.Println("scan", string(*a.Doc), len(*a.Doc))
    }
    fmt.Println()
    for _, a := range articles {
        fmt.Println("loop", string(*a.Doc), len(*a.Doc))
    }
}
Output:
scan {"type":"event1"} 17
scan {"type":"event2"} 17
loop {"type":"event2"} 17
loop {"type":"event2"} 17
So the articles end up pointing to the same json.
Am I doing something wrong?
UPDATE
Edited to a runnable example. I'm using Postgres and lib/pq.
I ran into this same issue, and after looking at it for a long time I read the doc on Scan, which says:
If an argument has type *[]byte, Scan saves in that argument a copy of the corresponding data. The copy is owned by the caller and can be modified and held indefinitely. The copy can be avoided by using an argument of type *RawBytes instead; see the documentation for RawBytes for restrictions on its use.
What I think is happening is that when you use a *json.RawMessage, Scan does not see it as a *[]byte and does not copy into it. So you get an internal slice that the next loop iteration's Scan overwrites.
Change your Scan to cast the *json.RawMessage to a *[]byte so that Scan will copy the values into it:
err := rows.Scan(
    &a.Id,
    (*[]byte)(a.Doc), // a.Doc must be non-nil here; see the note below
)
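Note that for this cast to work, a.Doc must already point at an allocated slice when Scan runs; a minimal sketch of initializing it inside the scan loop (my addition, not part of the original answer):

a := Article{Doc: &json.RawMessage{}}

With that, Scan copies each row's bytes into a slice owned by that Article, so later rows no longer overwrite earlier ones.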
In case it helps anyone:
I used masebase's answer to INSERT a json.RawMessage property of my struct into a PostgreSQL column of type jsonb.
All you need to do is cast: ([]byte)(a.Doc) in the insert binding method (without the * in my case).
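A minimal sketch of such an insert, assuming Doc is declared as a plain json.RawMessage value (hence no dereference) and the articles table from the question above:

_, err = db.Exec(`insert into articles (doc) values ($1)`, ([]byte)(a.Doc))
if err != nil {
    panic(err)
}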

Can I get EXPLAIN ANALYZE output through the lib/pq Go SQL driver?

I'd like to be able to evaluate my queries inside my app, which is in Go and using the github.com/lib/pq driver. Unfortunately, neither the lib/pq docs nor the database/sql docs seem to say anything about this, and nothing in the database/sql interfaces suggests it is possible.
Has anyone found a way to get this output?
A typical EXPLAIN ANALYZE returns several rows of text, so you can retrieve them with a plain sql.Query. Here is an example:
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "user=test dbname=test sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    rows, err := db.Query("EXPLAIN ANALYZE SELECT * FROM accounts ORDER BY slug")
    if err != nil {
        log.Fatal(err)
    }
    for rows.Next() {
        var s string
        if err := rows.Scan(&s); err != nil {
            log.Fatal(err)
        }
        fmt.Println(s)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
}