Selecting an int array from Postgres into struct and then marshalling it - sql

I have the following struct:
type Payment struct {
    ...
    PaymentMethods []int64 `json:"paymentMethods,omitempty" db:"payment_methods"`
}
In the Postgres database, the payments table has an integer array (int[]) column named payment_methods.
I need to run a simple SELECT query to populate that struct and then marshal it to JSON for the REST API.
However, when I run the following:
payment := Payment{}
err := psql.db.Unsafe().Get(&payment, "select * from payments where id = $1", id)
I get an error:
sql: Scan error on column index 12, name "payment_methods": unsupported Scan, storing driver.Value t
Now, I know that I can use the pq.Int64Array type instead of []int64.
However, the marshalling part won't work for that, and I want to find a simpler solution that doesn't add unnecessary overhead (unless that's impossible, of course).
The struct is then encoded in this handler:
func (handler *Handler) respond(w http.ResponseWriter, r *http.Request, data interface{}, status int) {
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.WriteHeader(status)
    if data != nil {
        err := json.NewEncoder(w).Encode(data)
        if err != nil {
            // errors.Wrap returns the wrapped error; it must be used, not discarded
            err = errors.Wrap(err, "Response Error while encoding data to json")
            log.Println(err)
            http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound)
        }
    }
}
How can I handle it in a generic way?
I'm using sqlx with the pgx driver.
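One direction worth trying (a sketch, not from the thread; it assumes github.com/lib/pq is importable for its array helpers, which parse the array literal text that database/sql Postgres drivers hand back): declare a named slice type that scans via pq.Int64Array. Because its underlying type is []int64, encoding/json still emits a plain array with no extra nesting.

import "github.com/lib/pq"

// IntSlice scans a Postgres int[] column but marshals as a plain JSON array
// such as [1,2,3], since its underlying type is []int64.
type IntSlice []int64

// Scan implements sql.Scanner by delegating to pq.Int64Array.
func (s *IntSlice) Scan(src interface{}) error {
    var arr pq.Int64Array
    if err := arr.Scan(src); err != nil {
        return err
    }
    *s = IntSlice(arr)
    return nil
}

With PaymentMethods declared as IntSlice, both the Get call and json.Marshal should work unchanged.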

Related

Go SQL query inconsistency

I am experiencing some really weird inconsistencies when executing queries, and was wondering if anyone knew why.
Imagine I have a struct defined as follows:
type Result struct {
    Afield string      `db:"A"`
    Bfield interface{} `db:"B"`
    Cfield string      `db:"C"`
    Dfield string      `db:"D"`
}
And a MySQL table with the following columns:
A : VARCHAR(50)
B : INT
C : VARCHAR(50)
D : VARCHAR(50)
The query I would like to execute:
SELECT A, B, C, D FROM table WHERE A="a"
The first way it can be executed:
db.Get(&result, `SELECT A, B, C, D FROM table WHERE A="a"`)
The second way it can be executed:
db.Get(&result, `SELECT A, B, C, D FROM table WHERE A=?`, "a")
The inconsistencies I am experiencing are as follows: when executing the query the first way, the type of Bfield is int. However, when executing it the second way, it is []uint8.
This happens, for example, when B is 1.
Why is the type of Bfield different depending on how the query is executed?
connection declaration:
// Connection is an interface for making queries.
type Connection interface {
    Exec(query string, args ...interface{}) (sql.Result, error)
    Get(dest interface{}, query string, args ...interface{}) error
    Select(dest interface{}, query string, args ...interface{}) error
}
EDIT
This is also happening when using the Go database/sql package with the driver directly. The queries below assign Bfield the types []uint8 and int64, respectively.
db is of type *sql.DB
query 1:
db.QueryRow(`SELECT A, B, C, D FROM table WHERE A="a"`).Scan(&result.Afield, &result.Bfield, &result.Cfield, &result.Dfield)
--> type of Bfield is []uint8
query 2:
db.QueryRow(`SELECT A, B, C, D FROM table WHERE A=?`, "a").Scan(&result.Afield, &result.Bfield, &result.Cfield, &result.Dfield)
--> type of Bfield is int64
EDIT
Something else to note: when chaining multiple WHERE clauses, as long as at least one of them is populated using ?, the query will return int. Otherwise, if they are all inlined in the query string, it will return []uint8.
Short answer: because the MySQL driver uses a different protocol for queries with and without parameters. Use a prepared statement to get consistent results.
The following explanation refers to the standard MySQL driver github.com/go-sql-driver/mysql, version 1.4
In the first case, the driver sends the query directly to MySQL and interprets the result as a *textRows struct. This struct (almost) always decodes results into a byte slice and leaves the conversion to a better type to the Go sql package. This works fine if the destination is an int, string, sql.Scanner, etc., but not for interface{}.
In the second case, the driver detects that there are arguments and returns driver.ErrSkip. This causes the Go sql package to use a prepared statement. In that case, the MySQL driver uses a *binaryRows struct to interpret the results. This struct uses the declared column type (INT in this case) to decode the value, here into an int64.
Fun fact: if you provide the interpolateParams=true parameter in the database DSN (e.g. "root:testing@/mysql?interpolateParams=true"), the MySQL driver will interpolate the parameters on the client side and not use a prepared statement. At that point both types of query behave the same.
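For illustration, a minimal sketch of that DSN flag in use (same example credentials as above; assumes the go-sql-driver/mysql driver):

// With interpolateParams=true the driver inlines the arguments client-side
// instead of returning driver.ErrSkip, so parameterized queries also go
// through the text protocol.
db, err := sql.Open("mysql", "root:testing@/mysql?interpolateParams=true")
if err != nil {
    log.Fatal(err)
}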
A small proof of concept:
package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

type Result struct {
    Afield string
    Bfield interface{}
}

func main() {
    db, err := sql.Open("mysql", "root:testing@/mysql")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if _, err = db.Exec(`CREATE TABLE IF NOT EXISTS mytable(A VARCHAR(50), B INT);`); err != nil {
        log.Fatal(err)
    }
    if _, err = db.Exec(`DELETE FROM mytable`); err != nil {
        log.Fatal(err)
    }
    if _, err = db.Exec(`INSERT INTO mytable(A, B) VALUES ('a', 3)`); err != nil {
        log.Fatal(err)
    }

    var (
        usingLiteral         Result
        usingParam           Result
        usingLiteralPrepared Result
    )

    row := db.QueryRow(`SELECT B FROM mytable WHERE A='a'`)
    if err := row.Scan(&usingLiteral.Bfield); err != nil {
        log.Fatal(err)
    }

    row = db.QueryRow(`SELECT B FROM mytable WHERE A=?`, "a")
    if err := row.Scan(&usingParam.Bfield); err != nil {
        log.Fatal(err)
    }

    stmt, err := db.Prepare(`SELECT B FROM mytable WHERE A='a'`)
    if err != nil {
        log.Fatal(err)
    }
    defer stmt.Close()

    row = stmt.QueryRow()
    if err := row.Scan(&usingLiteralPrepared.Bfield); err != nil {
        log.Fatal(err)
    }

    log.Printf("Type when using literal: %T", usingLiteral.Bfield)          // []uint8
    log.Printf("Type when using param: %T", usingParam.Bfield)              // int64
    log.Printf("Type when using prepared: %T", usingLiteralPrepared.Bfield) // int64
}
Your first SQL string is ambiguous in MySQL and can have two meanings, as explained on Stack Overflow here:
When to use single quotes, double quotes, and back ticks in MySQL
Depending on the SQL mode, your SQL command can be interpreted as
SELECT A, B, C, D FROM table WHERE A='a'
which is what I think you are expecting, or as
SELECT A, B, C, D FROM table WHERE A=`a`
To remove this ambiguity, can you rerun the FIRST test with the double quotes replaced by single quotes?
If the same behavior is still there, my answer is not the right explanation.
If BOTH SELECTs return the same value, your question is solved.
With the backtick character you pass a column name, not a string value!

Golang database manager api concept, error with type assertion

The basic concept is a database manager API that serves data through API endpoints. I am using GORM to fetch the instances of the structs, and there are 300-400 structs representing the tables.
type Users struct {
    ID   int64
    Name string
}

type Categories struct {
    ID       int64
    Category string
}
Next, I implement a function which returns the correct struct instance for a table name, which I get through the API endpoint parameter.
func GetModel(model string) interface{} {
    switch model {
    case "users":
        return Users{}
    case "categories":
        return Categories{}
    }
    return false
}
Then there is an Operations struct whose only field is the DB. It has methods, for example GetLast(), where I want to use GORM's db.Last(&users) function.
func (o Operations) GetLast(model string) interface{} {
    modelStruct := GetModel(model)
    .
    .
    .
    return o.DB.Last(&modelStruct)
}
This is the point I am stuck at. The current solution does not work because modelStruct is an interface{}, so I need a type assertion (more info in this question). The type assertion looks like:
func (o Operations) GetLast(model string) interface{} {
    modelStruct := GetModel(model)
    .
    test := modelStruct.(Users)
    .
    return o.DB.Last(&test)
}
This solution works, but I lose the modularity. I tried reflect.TypeOf(modelStruct), but that does not work either, because reflect.TypeOf returns a reflect.Type, which is not a Go type.
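For comparison, a hedged sketch of the reflection route (names follow the question; the point is that reflect.New yields a pointer to a fresh value of the same dynamic type, which GORM can populate without a concrete-type assertion):

func (o Operations) GetLast(model string) interface{} {
    modelStruct := GetModel(model)
    // reflect.New allocates a new value of modelStruct's dynamic type
    // (e.g. Users), and Interface() returns it as an interface{} holding
    // a pointer (e.g. *Users).
    ptr := reflect.New(reflect.TypeOf(modelStruct)).Interface()
    o.DB.Last(ptr)
    return ptr
}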
Basically, I solved the problem by getting the model as a pointer and then returning it as JSON.
So my model map is the following:
var Models = map[string]interface{}{
    "users":      new(Users),
    "categories": new(Categories),
}
It returns a new model pointer by table name, which I can use with GORM's First() function, then marshal to JSON and return.
func (o Operation) First(model string, query url.Values) string {
    modelStruct := Models[model]
    db := o.DB
    db.First(modelStruct)
    response, _ := json.Marshal(modelStruct)
    clear(modelStruct)
    return string(response)
}
Before the return I clear the model pointer, because the pointers live in a shared map and would otherwise keep the values from the previous query.
func clear(v interface{}) {
    p := reflect.ValueOf(v).Elem()
    p.Set(reflect.Zero(p.Type()))
}

Accessing Data From Interfaces in Go

I am trying to implement a simple API in Go. My backend experience is more with Python and Node, so I am having some difficulty printing out the data held within the interface, since it won't let me index it. I have searched around, and several people have asked similar questions when the interface holds a single value, but not when it holds a slice ([]interface{}, I believe). I have tried to get at the underlying data, to no avail.
When I point the browser to /quandl/ddd/10, I would like to fmt.Println the specific numerical data, i.e.
["2017-01-13", 15.67, 16.41, 15.67, 16.11, 3595248, 0, 1, 15.67, 16.41, 15.67, 16.11, 3595248]
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "net/url"

    "github.com/fatih/color"
    "github.com/gorilla/mux"
)

type QuandlResponse struct {
    SourceCode string      `json:"source_code"`
    SourceName string      `json:"source_name"`
    Code       string      `json:"code"`
    Frequency  string      `json:"frequency"`
    FromDate   string      `json:"from_date"`
    ToDate     string      `json:"to_date"`
    Columns    []string    `json:"column_names"`
    Data       interface{} `json:"data"`
}

func getContent(w http.ResponseWriter, r *http.Request) {
    stock := mux.Vars(r)["stock"]
    limit := mux.Vars(r)["limit"]
    url := "https://www.quandl.com/api/v1/datasets/WIKI/" + url.QueryEscape(stock) +
        ".json?&limit=" + url.QueryEscape(limit) + "&auth_token=XXXXX"
    response, err := http.Get(url)
    if err != nil {
        fmt.Println(err)
    }
    contents, err := ioutil.ReadAll(response.Body)
    var result QuandlResponse
    json.Unmarshal(contents, &result)
    json.NewEncoder(w).Encode(result)
    fmt.Println(result.Data[0])
}

func callAll() {
    rabbit := mux.NewRouter()
    rabbit.HandleFunc("/quandl/{stock}/{limit}", getContent)
    http.ListenAndServe(":8000", rabbit)
}

func main() {
    color.Blue("Running Server @localhost:8000")
    callAll()
}
If you know that the type of Data is []interface{}, you can do a type assertion:
slice := result.Data.([]interface{})
fmt.Println(slice[0])
If there are several possibilities for the type of Data, you can use a type switch:
switch data := result.Data.(type) {
case []interface{}:
    fmt.Println(data[0])
case string:
    fmt.Println(data)
default:
    // unexpected type
}
You may also want to look at the reflect package if your requirements are more complicated.
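For instance, a hedged sketch of the reflection route (printFirst is a hypothetical helper, not from the answer):

// printFirst uses reflection to print the first element of any slice value,
// regardless of its element type.
func printFirst(v interface{}) {
    rv := reflect.ValueOf(v)
    if rv.Kind() == reflect.Slice && rv.Len() > 0 {
        fmt.Println(rv.Index(0).Interface())
    }
}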

Get back newly inserted row in Postgres with sqlx

I use https://github.com/jmoiron/sqlx to make queries to Postgres.
Is it possible to get back the whole row data when inserting a new row?
Here is the query I run:
result, err := Db.Exec("INSERT INTO users (name) VALUES ($1)", user.Name)
Or should I just use my existing user struct as the source of truth about the new entry in the database?
Here is what the sqlx documentation says about query results:
The result has two possible pieces of data: LastInsertId() or RowsAffected(), the availability of which is driver dependent. In MySQL, for instance, LastInsertId() will be available on inserts with an auto-increment key, but in PostgreSQL, this information can only be retrieved from a normal row cursor by using the RETURNING clause.
So I made a complete demo of how to execute a transaction using sqlx. The demo creates an address row in the addresses table, then creates a user in the users table, using the new address_id PK as the user_address_id FK of the user.
package transaction

import (
    "database/sql"
    "log"

    "github.com/icrowley/fake"
    "github.com/jmoiron/sqlx"
    "github.com/pkg/errors"
)

type User struct {
    UserID        int           `db:"user_id"`
    UserNme       string        `db:"user_nme"`
    UserEmail     string        `db:"user_email"`
    UserAddressId sql.NullInt64 `db:"user_address_id"`
}

type ITransactionSamples interface {
    CreateUserTransaction() (*User, error)
}

type TransactionSamples struct {
    Db *sqlx.DB
}

func NewTransactionSamples(Db *sqlx.DB) ITransactionSamples {
    return &TransactionSamples{Db}
}

func (ts *TransactionSamples) CreateUserTransaction() (*User, error) {
    tx := ts.Db.MustBegin()
    var lastInsertId int
    err := tx.QueryRowx(`INSERT INTO addresses (address_id, address_city, address_country, address_state) VALUES ($1, $2, $3, $4) RETURNING address_id`, 3, fake.City(), fake.Country(), fake.State()).Scan(&lastInsertId)
    if err != nil {
        tx.Rollback()
        return nil, errors.Wrap(err, "insert address error")
    }
    log.Println("lastInsertId: ", lastInsertId)

    var user User
    err = tx.QueryRowx(`INSERT INTO users (user_id, user_nme, user_email, user_address_id) VALUES ($1, $2, $3, $4) RETURNING *;`, 6, fake.UserName(), fake.EmailAddress(), lastInsertId).StructScan(&user)
    if err != nil {
        tx.Rollback()
        return nil, errors.Wrap(err, "insert user error")
    }

    err = tx.Commit()
    if err != nil {
        return nil, errors.Wrap(err, "tx.Commit()")
    }
    return &user, nil
}
Here is the test result:
☁ transaction [master] ⚡ go test -v -count 1 ./...
=== RUN TestCreateUserTransaction
2019/06/27 16:38:50 lastInsertId: 3
--- PASS: TestCreateUserTransaction (0.01s)
transaction_test.go:28: &transaction.User{UserID:6, UserNme:"corrupti", UserEmail:"reiciendis_quam@Thoughtstorm.mil", UserAddressId:sql.NullInt64{Int64:3, Valid:true}}
PASS
ok sqlx-samples/transaction 3.254s
This is sample code that works with named queries and strongly typed structures for the inserted data and ID.
The query and struct are included to show the syntax used.
const insertCheck = `INSERT INTO checks (
    start, status) VALUES (
    :start, :status)
    returning id;`

type Row struct {
    Status string    `db:"status"`
    Start  time.Time `db:"start"`
}

func InsertCheck(ctx context.Context, row Row, tx *sqlx.Tx) (int64, error) {
    return insert(ctx, row, insertCheck, "checks", tx)
}

// insert inserts row into table using the query SQL command.
// table is used only for logging; the actual table name is defined in query.
// It should not be used from services directly - implement strongly typed wrappers.
// The function expects a query with named parameters.
func insert(ctx context.Context, row interface{}, query string, table string, tx *sqlx.Tx) (int64, error) {
    // convert the named query to the native parameter format
    query, args, err := tx.BindNamed(query, row)
    if err != nil {
        return 0, fmt.Errorf("cannot bind parameters for insert into %q: %w", table, err)
    }

    var id struct {
        Val int64 `db:"id"`
    }
    err = sqlx.GetContext(ctx, tx, &id, query, args...)
    if err != nil {
        return 0, fmt.Errorf("cannot insert into %q: %w", table, err)
    }
    return id.Val, nil
}
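A hypothetical usage sketch (not part of the answer; it assumes an open *sqlx.DB named db and the Row type above):

// createCheck runs InsertCheck inside a transaction and returns the new id.
func createCheck(ctx context.Context, db *sqlx.DB) (int64, error) {
    tx, err := db.Beginx()
    if err != nil {
        return 0, err
    }
    id, err := InsertCheck(ctx, Row{Status: "pending", Start: time.Now()}, tx)
    if err != nil {
        tx.Rollback()
        return 0, err
    }
    return id, tx.Commit()
}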
PostgreSQL supports RETURNING syntax for INSERT statements.
Example:
INSERT INTO users(...) VALUES(...) RETURNING id, name, foo, bar
Documentation: https://www.postgresql.org/docs/9.6/static/sql-insert.html
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted (or updated, if an ON CONFLICT DO UPDATE clause was used). This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT. Only rows that were successfully inserted or updated will be returned.
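Applied to the question's INSERT, a minimal sketch (assuming Db is a *sqlx.DB and a User struct whose db tags match the users table): QueryRowx with RETURNING * hands back the whole row in one round trip.

// The inserted row, including defaulted columns, is scanned straight
// into the struct.
var created User
err := Db.QueryRowx(
    "INSERT INTO users (name) VALUES ($1) RETURNING *",
    user.Name,
).StructScan(&created)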

How can I work with SQL NULL values and JSON in a good way?

Go types like int64 and string cannot store NULL values,
so I found I could use sql.NullInt64 and sql.NullString for this.
But when I use these in a struct,
and generate JSON from the struct with the json package,
the format is different from when I use regular int64 and string types.
The JSON has an additional level because the sql.Null*** types are also structs.
Is there a good workaround for this,
or should I not use NULLs in my SQL database?
Types like sql.NullInt64 do not implement any special handling for JSON marshaling or unmarshaling, so the default rules apply. Since the type is a struct, it gets marshalled as an object with its fields as attributes.
One way to work around this is to create your own type that implements the json.Marshaler / json.Unmarshaler interfaces. By embedding the sql.NullInt64 type, we get the SQL methods for free. Something like this:
type JsonNullInt64 struct {
    sql.NullInt64
}

func (v JsonNullInt64) MarshalJSON() ([]byte, error) {
    if v.Valid {
        return json.Marshal(v.Int64)
    } else {
        return json.Marshal(nil)
    }
}

func (v *JsonNullInt64) UnmarshalJSON(data []byte) error {
    // Unmarshalling into a pointer will let us detect null
    var x *int64
    if err := json.Unmarshal(data, &x); err != nil {
        return err
    }
    if x != nil {
        v.Valid = true
        v.Int64 = *x
    } else {
        v.Valid = false
    }
    return nil
}
If you use this type in place of sql.NullInt64, it should be encoded as you expect.
You can test this example here: http://play.golang.org/p/zFESxLcd-c
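A quick illustration of the difference in output (the struct and field names here are hypothetical):

// Both fields hold the value 42; only the JSON shape differs.
type Order struct {
    Plain    sql.NullInt64 `json:"plain"`
    Friendly JsonNullInt64 `json:"friendly"`
}

// json.Marshal of an Order with both fields valid and set to 42 produces:
//   {"plain":{"Int64":42,"Valid":true},"friendly":42}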
If you use the null.v3 package, you won't need to implement any of the marshal or unmarshal methods. It's a superset of the sql.Null structs and is probably what you want.
package main

import "gopkg.in/guregu/null.v3"

type Person struct {
    Name     string      `json:"name"`
    Age      int         `json:"age"`
    NickName null.String `json:"nickname"` // Optional
}
If you'd like to see a full Golang webserver that uses sqlite, nulls, and json you can consult this gist.