json.RawMessage from database/sql json column getting overwritten

I'm getting strange behaviour with a struct containing a json.RawMessage field.
package main

import (
    "database/sql"
    "encoding/json"
    "fmt"

    _ "github.com/lib/pq"
)

type Article struct {
    Id  int
    Doc *json.RawMessage
}

func main() {
    db, err := sql.Open("postgres", "postgres://localhost/json_test?sslmode=disable")
    if err != nil {
        panic(err)
    }
    _, err = db.Query(`create table if not exists articles (id serial primary key, doc json)`)
    if err != nil {
        panic(err)
    }
    _, err = db.Query(`truncate articles`)
    if err != nil {
        panic(err)
    }
    docs := []string{
        `{"type":"event1"}`,
        `{"type":"event2"}`,
    }
    for _, doc := range docs {
        _, err = db.Query(`insert into articles ("doc") values ($1)`, doc)
        if err != nil {
            panic(err)
        }
    }
    rows, err := db.Query(`select id, doc from articles`)
    if err != nil {
        panic(err)
    }
    articles := make([]Article, 0)
    for rows.Next() {
        var a Article
        err := rows.Scan(
            &a.Id,
            &a.Doc,
        )
        if err != nil {
            panic(err)
        }
        articles = append(articles, a)
        fmt.Println("scan", string(*a.Doc), len(*a.Doc))
    }
    fmt.Println()
    for _, a := range articles {
        fmt.Println("loop", string(*a.Doc), len(*a.Doc))
    }
}
Output:
scan {"type":"event1"} 17
scan {"type":"event2"} 17
loop {"type":"event2"} 17
loop {"type":"event2"} 17
So the articles end up pointing to the same json.
Am I doing something wrong?
UPDATE
Edited to a runnable example. I'm using Postgres and lib/pq.

I ran into this same issue, and after looking at it for a long time I read the doc on Scan, which says:
If an argument has type *[]byte, Scan saves in that argument a copy of the corresponding data. The copy is owned by the caller and can be modified and held indefinitely. The copy can be avoided by using an argument of type *RawBytes instead; see the documentation for RawBytes for restrictions on its use.
What I think is happening is that when you pass a *json.RawMessage, Scan does not see it as a *[]byte and does not copy the data. So you end up holding a reference to an internal slice that the next call to Scan overwrites.
Change your Scan to cast the *json.RawMessage to a *[]byte so that Scan will copy the values into it:
err := rows.Scan(
    &a.Id,
    (*[]byte)(a.Doc),
)
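Note that a.Doc has to point at an allocated json.RawMessage before the cast (for example a.Doc = new(json.RawMessage)), otherwise Scan reports a nil destination pointer. An equivalent pattern, shown here only as a sketch against the same Article struct, is to scan into a plain []byte, which Scan always copies, and point Doc at it afterwards:
var a Article
var doc []byte // destinations of type *[]byte are copied by Scan, so each row owns its data
err := rows.Scan(&a.Id, &doc)
if err != nil {
    panic(err)
}
raw := json.RawMessage(doc)
a.Doc = &raw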

In case that helps anyone:
I used masebase's answer to INSERT a json.RawMessage property of my struct into a PostgreSQL column of type jsonb.
All you need to do is cast: ([]byte)(a.Doc) in the insert binding call (without the *, since in my case the field is not a pointer).
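For illustration, a minimal sketch of that insert, assuming a hypothetical Event struct whose Doc field is a non-pointer json.RawMessage, an events table with a jsonb doc column, and the same db handle as above:
type Event struct {
    Id  int
    Doc json.RawMessage
}

// e.Doc is passed as a []byte, which the driver sends as the raw JSON text for the jsonb column.
_, err := db.Exec(`insert into events ("doc") values ($1)`, []byte(e.Doc))
if err != nil {
    panic(err)
}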

Related

How do I create a structure for a dynamic SQL query?

In my Golang application I make SQL requests to the database. Usually I specify in the SQL query the columns that I want to get from the table and create a structure based on them. You can see an example of the working code below.
QUESTION:
What should I do if I don't know the number and names of the columns in the table? For example, when I make a SQL request like SELECT * FROM filters; instead of SELECT FILTER_ID, FILTER_NAME FROM filters;, how do I create a structure in this case?
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM filters;"); if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    columns, err := rows.Columns(); if err != nil {
        fmt.Println(err)
        return
    }
    filters := make([]interface{}, len(columns))
    for i := range columns {
        filters[i] = new(sql.RawBytes)
    }
    for rows.Next() {
        if err = rows.Scan(filters...); err != nil {
            fmt.Println(err)
            return
        }
    }
    utils.Response(responseWriter, http.StatusOK, filters)
}
Well, finally I found the solution. As you can see from the code below, first I make a SQL request without specifying the column names. Then I get information about the columns with the ColumnTypes() function, which returns column information such as the column type, length, and nullability. Next I read the name and type of each column and fill a map of interfaces with that data:
for i, column := range columns {
    object[column.Name()] = reflect.New(column.ScanType()).Interface()
    values[i] = object[column.Name()]
}
The full code which I use looks like this:
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM table_name;"); if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    var objects []map[string]interface{}
    for rows.Next() {
        columns, err := rows.ColumnTypes(); if err != nil {
            fmt.Println(err)
            return
        }
        values := make([]interface{}, len(columns))
        object := map[string]interface{}{}
        for i, column := range columns {
            object[column.Name()] = reflect.New(column.ScanType()).Interface()
            values[i] = object[column.Name()]
        }
        if err = rows.Scan(values...); err != nil {
            fmt.Println(err)
            return
        }
        objects = append(objects, object)
    }
    utils.Response(responseWriter, http.StatusOK, objects)
}
Use the USER_TAB_COLUMNS table to get the list of columns of the table being queried and store it in an array or collection. Later, execute the query and Scan into the columns that you already know from the previous query.
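A rough sketch of that idea against the question's ClickHouse handle (USER_TAB_COLUMNS is Oracle's catalog view; ClickHouse exposes the same information in system.columns, and the table name 'filters' is just the example from the question):
// Fetch the column names up front, then build scan targets from the known list.
var cols []string
colRows, err := database.ClickHouse.Query("SELECT name FROM system.columns WHERE table = 'filters'")
if err != nil {
    fmt.Println(err)
    return
}
defer colRows.Close()
for colRows.Next() {
    var name string
    if err := colRows.Scan(&name); err != nil {
        fmt.Println(err)
        return
    }
    cols = append(cols, name)
}
// cols now holds the column list, so the main query can be scanned as in the answers above.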

How to optimize database connections

In my Go application I use the crontab package to run the Tracker function every minute. As you can see from the code, it calls a PostgreSQL function. To interact with the PostgreSQL database I use the gorm package. The application worked for several days without any problem, but now I notice an error in the logs: pq: sorry, too many clients already. I know the same question has been asked several times on Stack Overflow before. For example, in this post people advise using the Exec or Scan methods. In my case, as you can see, I already use the Exec method, but I still get the error. As far as I understand, each database request opens a separate connection and does not close it. I can't figure out what I'm doing wrong.
main.go:
package main

import (
    "questionnaire/controllers"
    "questionnaire/database"
    "questionnaire/utils"

    "github.com/mileusna/crontab"
)

func main() {
    database.ConnectPostgreSQL()
    defer database.DisconnectPostgreSQL()
    err := crontab.New().AddJob("* * * * *", controllers.Tracker); if err != nil {
        utils.Logger().Fatal(err)
        return
    }
}
tracker.go:
package controllers

import (
    "time"

    "questionnaire/database"
    "questionnaire/utils"
)

var Tracker = func() {
    err := database.DBGORM.Exec("CALL tracker($1)", time.Now().Format("2006-01-02 15:04:05")).Error; if err != nil {
        utils.Logger().Println(err) // ERROR: pq: sorry, too many clients already
        return
    }
}
PostgreSQL.go:
package database

import (
    "fmt"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/postgres"
    "github.com/joho/godotenv"

    "questionnaire/utils"
)

var DBGORM *gorm.DB

func ConnectPostgreSQL() {
    err := godotenv.Load(".env")
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }
    databaseUser := utils.CheckEnvironmentVariable("PostgreSQL_USER")
    databasePassword := utils.CheckEnvironmentVariable("PostgreSQL_PASSWORD")
    databaseHost := utils.CheckEnvironmentVariable("PostgreSQL_HOST")
    databaseName := utils.CheckEnvironmentVariable("PostgreSQL_DATABASE_NAME")
    databaseURL := fmt.Sprintf("host=%s user=%s dbname=%s password=%s sslmode=disable", databaseHost, databaseUser, databaseName, databasePassword)
    DBGORM, err = gorm.Open("postgres", databaseURL)
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }
    err = DBGORM.DB().Ping()
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }
    DBGORM.LogMode(true)
}

func DisconnectPostgreSQL() error {
    return DBGORM.Close()
}
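Not part of the original post, but since the error is about running out of connections: database/sql exposes pool limits on the *sql.DB that gorm wraps, so a sketch of capping them at the end of ConnectPostgreSQL could look like this (the numbers are only illustrative and would need tuning against the server's max_connections):
// Hypothetical addition at the end of ConnectPostgreSQL (requires importing "time"):
// cap the pool so concurrent jobs cannot exhaust PostgreSQL's connection limit.
sqlDB := DBGORM.DB()                       // the *sql.DB underneath gorm
sqlDB.SetMaxOpenConns(10)                  // never hold more than 10 open connections
sqlDB.SetMaxIdleConns(5)                   // keep at most 5 idle connections around
sqlDB.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically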

How to backup the one row from the sql database?

I want to write a function in Go that backs up SQL data. I have written sample code that backs up the whole database, and a command for a single table too, but I don't know how to dump a single row of data into a given path.
For database:
package main

import (
    "io/ioutil"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("mysqldump", "-P3306", "-hhost", "-uuser", "-ppassword", "database_name")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    bytes, err := ioutil.ReadAll(stdout)
    if err != nil {
        log.Fatal(err)
    }
    err = ioutil.WriteFile("./out.sql", bytes, 0644)
    if err != nil {
        panic(err)
    }
}
For a single table we can change the command like below:
cmd := exec.Command("mysqldump", "-P3306", "-hhost", "-uuser", "-ppassword", "database_name", "table_name")
What should I write to dump a single row of a table? Any suggestions with short code would help.
For example: dump the row where id equals 1.
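A sketch of one possible approach (not from the original post), assuming the installed mysqldump supports the --where flag; the column id and the value 1 mirror the example above:
// Dump only the rows of table_name that match the WHERE clause (here id = 1).
cmd := exec.Command("mysqldump", "-P3306", "-hhost", "-uuser", "-ppassword",
    "--where=id=1", "database_name", "table_name")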

Bulk insert copy sql table with golang

For context, I'm new to Go and I'm writing a program that copies tables from Oracle to MySQL.
I use the Go database/sql package, so I assume it can be used to migrate any kind of database.
To simplify my question, I'm copying within the same MySQL database, from the table world.city to world.city_copy2.
With the following code, I end up with the last row's values repeated in every row of the destination table :-(
Do I somehow need to read through all the values inside the loop? What is the efficient way to do that?
package main

import (
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/go-sql-driver/mysql"
)

const (
    user   = "user"
    pass   = "testPass"
    server = "localhost"
)

func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/world", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        bulkValues = append(bulkValues, scanArgs...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
I have checked out the docs of the library, and it seems that the problem is that bulkValues keeps the pointers from scanArgs, so when Scan overwrites the underlying values on the next iteration, everything bulkValues points at also changes to the latest row.
You need to use the values variable to copy the values out, like below:
func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/soverflow", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    // make an interface slice to keep each record's values
    record := make([]interface{}, len(columns))
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        for i, col := range values {
            // you need to be careful with the datatypes here;
            // check out the docs for details
            record[i] = string(col) // copy the RawBytes out of the driver's reused buffer
        }
        bulkValues = append(bulkValues, record...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
You can also find the example from the documentation here.
Note: there might be more efficient ways to copy data between databases, but this answer only gives a quick solution for the particular issue you are having.

Load and Store in Go Language [duplicate]

This question already has answers here:
Efficient Go serialization of struct to disk
(3 answers)
Closed 6 years ago.
I am new to Go and was wondering if there is a way to load and store precomputed variables in Go like pickle in Python.
My code creates a map and an array from some data, and I don't want to spend time recomputing them every time the code runs.
I want to load that map and array directly next time I run the code.
Can someone help me with this?
TIA :)
I don't know how pickle works, but if you want to dump a struct to a file, maybe you can use the gob package; see this for more detail: How do I dump the struct into the byte array without reflection?
Also, I found a package that can read and write Python's pickle format: https://github.com/hydrogen18/stalecucumber.
Calculate your variables once and save them all to a file, then open that file and load them all on subsequent runs.
When there is no file to open, it is the first run, so calculate and save them once.
You may use your own file format if you like, or a standard library package such as "encoding/json", "encoding/gob", "encoding/csv", "encoding/xml", ....
This:
data := calcOnce()
reads the file:
rd, err := ioutil.ReadFile(once)
and, if there is no error, loads all the variables; otherwise it calculates and saves them once.
Here's the working code:
1- Using "encoding/json", try it on The Go Playground:
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
)

type Data struct {
    A [2]int
    B map[int]string
}

func main() {
    data := calcOnce()
    fmt.Println(data) // {[101 102] map[1:Hello 2:World.]}
}

func calcOnce() Data {
    const once = "date.json"
    rd, err := ioutil.ReadFile(once)
    if err != nil {
        // calc and save once:
        data := Data{[2]int{101, 102}, map[int]string{1: "Hello ", 2: "World."}}
        buf, err := json.Marshal(data)
        if err != nil {
            panic(err)
        }
        // fmt.Println(string(buf))
        err = ioutil.WriteFile(once, buf, 0666)
        if err != nil {
            panic(err)
        }
        return data
    }
    var d *Data
    err = json.Unmarshal(rd, &d)
    if err != nil {
        panic(err)
    }
    return *d
}
2- Using "encoding/gob", try it on The Go Playground:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "io/ioutil"
)

type Data struct {
    A [2]int
    B map[int]string
}

func main() {
    data := calcOnce()
    fmt.Println(data) // {[101 102] map[2:World. 1:Hello ]}
}

func calcOnce() Data {
    const once = "date.bin"
    rd, err := ioutil.ReadFile(once)
    if err != nil {
        // calc and save once:
        data := Data{[2]int{101, 102}, map[int]string{1: "Hello ", 2: "World."}}
        buf := &bytes.Buffer{}
        err = gob.NewEncoder(buf).Encode(data)
        if err != nil {
            panic(err)
        }
        err = ioutil.WriteFile(once, buf.Bytes(), 0666)
        if err != nil {
            panic(err)
        }
        return data
    }
    var d Data
    err = gob.NewDecoder(bytes.NewReader(rd)).Decode(&d)
    if err != nil {
        panic(err)
    }
    return d
}
3- For protobuf, see: Efficient Go serialization of struct to disk
Perhaps the gob package is closest:
https://golang.org/pkg/encoding/gob/