golang null.String decoding not working correctly - sql

I'm trying to fix a problem I'm having with the API I'm building.
db:
DROP TABLE IF EXISTS contacts CASCADE;
CREATE TABLE IF NOT EXISTS contacts (
uuid UUID UNIQUE PRIMARY KEY,
first_name varchar(150)
);
DROP TABLE IF EXISTS workorders CASCADE;
CREATE TABLE IF NOT EXISTS workorders (
uuid UUID UNIQUE PRIMARY KEY,
work_date timestamp WITH time zone,
requested_by UUID REFERENCES contacts (uuid) ON UPDATE CASCADE ON DELETE CASCADE
);
struct:
https://gopkg.in/guregu/null.v3
type WorkorderNew struct {
UUID string `json:"uuid"`
WorkDate null.Time `json:"work_date"`
RequestedBy null.String `json:"requested_by"`
}
api code:
workorder := &models.WorkorderNew{}
if err := json.NewDecoder(r.Body).Decode(workorder); err != nil {
log.Println("decoding fail", err)
}
// fmt.Println(NewUUID())
u2, err := uuid.NewV4()
if err != nil {
log.Fatalf("failed to generate UUID: %v", err)
}
q := `
INSERT
INTO workorders
(uuid,
work_date,
requested_by
)
VALUES
($1,$2,$3)
RETURNING uuid;`
statement, err := global.DB.Prepare(q)
global.CheckDbErr(err)
fmt.Println("requested by", workorder.RequestedBy)
lastInsertID := ""
err = statement.QueryRow(
u2,
workorder.WorkDate,
workorder.RequestedBy,
).Scan(&lastInsertID)
global.CheckDbErr(err)
json.NewEncoder(w).Encode(lastInsertID)
When I send an API request with null as the value, it works as expected,
but when I try to send "" as the value for the null.String or the null.Time, it fails.
works:
{
"work_date":"2016-12-16T19:00:00Z",
"requested_by":null
}
not working:
{
"work_date":"2016-12-16T19:00:00Z",
"requested_by":""
}
Basically, when I call QueryRow and save to the database, the workorder.RequestedBy value should be NULL, not the "" I'm getting.
Thanks

If you want to treat empty strings as nulls you have at least two options.
"Extend" null.String:
type MyNullString struct {
null.String
}
func (ns *MyNullString) UnmarshalJSON(data []byte) error {
if string(data) == `""` {
ns.Valid = false
return nil
}
return ns.String.UnmarshalJSON(data)
}
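With that in place, the struct from the question just swaps the field type so the custom unmarshaller runs (a minimal sketch):
type WorkorderNew struct {
UUID string `json:"uuid"`
WorkDate null.Time `json:"work_date"`
RequestedBy MyNullString `json:"requested_by"`
}
Since MyNullString still embeds null.String, it keeps the driver.Valuer implementation, so QueryRow writes NULL whenever Valid is false.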
Or use NULLIF in the query:
INSERT INTO workorders (
uuid
, work_date
, requested_by
) VALUES (
$1
, $2
, NULLIF($3, '')
)
RETURNING uuid
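With this, the Go code stays as it is; the empty string is converted to NULL inside Postgres. NULLIF returns NULL when its two arguments are equal and the first argument otherwise, which you can verify with a quick query (a sketch; db is any *sql.DB connected to Postgres):
var out sql.NullString
if err := db.QueryRow(`SELECT NULLIF($1, '')`, "").Scan(&out); err != nil {
log.Fatal(err)
}
fmt.Println(out.Valid) // false: the empty string came back as NULL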
Update:
To extend the null.Time you have to understand that null.Time.Time is a struct. The builtin len function works on slices, arrays, pointers to arrays, maps, channels, and strings, not structs. So in this case you can check the data argument, which is a byte slice, by converting it to a string and comparing it against `""`, i.e. a string containing two double quotes and nothing else.
type MyNullTime struct {
null.Time
}
func (ns *MyNullTime) UnmarshalJSON(data []byte) error {
if string(data) == `""` {
ns.Valid = false
return nil
}
return ns.Time.UnmarshalJSON(data)
}
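A quick check that the override behaves as intended (a self-contained sketch, assuming the MyNullTime type above is in scope):
package main

import (
"encoding/json"
"fmt"
)

func main() {
var v struct {
WorkDate MyNullTime `json:"work_date"`
}
if err := json.Unmarshal([]byte(`{"work_date":""}`), &v); err != nil {
fmt.Println("unmarshal:", err)
}
fmt.Println(v.WorkDate.Valid) // false: "" now decodes as an invalid (NULL) time
}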

Related

Inserting empty string or null into postgres as null using jackc/pgx

I'm using an external json API that's inconsistent in the way it handles missing values. Sometimes json values show up as empty strings and other times as null. For example...
Case 1: datedec and curr are both empty strings.
{
"symbol": "XYZ",
"dateex": "2020-09-01",
"datedec": "",
"amount": "1.25",
"curr": "",
"freq": "annual"
}
Case 2: datedec is null. curr is populated.
{
"symbol": "XYZ",
"dateex": "2020-09-01",
"datedec": null,
"amount": "1.25",
"curr": "USD",
"freq": "annual"
}
Here is the struct I'm using to represent a dividend:
type Dividend struct {
symbol string `json:"symbol"`
dateex string `json:"dateex"`
datedec string `json:"datedec"`
amount string `json:"amount"`
curr string `json:"curr"`
freq string `json:"freq"`
}
The problem I'm having is how to insert either an empty string or null into the database as NULL. I know I could use an omitempty json tag, but then how would I write a function to handle values I don't know will be missing? For example, here is my current function to insert a dividend into postgresql using the jackc/pgx package:
func InsertDividend(d Dividend) error {
sql := `INSERT INTO dividends
(symbol, dateex, datedec, amount, curr, freq)
VALUES ($1, $2, $3, $4, $5, $6)`
conn, err := pgx.Connect(ctx, "DATABASE_URL")
// handle error
defer conn.Close(ctx)
tx, err := conn.Begin(ctx)
// handle error
defer tx.Rollback(ctx)
_, err = tx.Exec(ctx, sql, d.symbol, d.dateex, d.datedec, d.amount, d.curr, d.freq)
// handle error
err = tx.Commit(ctx)
// handle error
return nil
}
If a value (e.g. datedec or curr) is missing, then this function will error. From the post Golang Insert NULL into sql instead of empty string I saw how to solve Case 1. But is there a more general way to handle both cases (null or empty string)?
I've been looking through the database/sql & jackc/pgx documentation but I have yet to find anything. I think the sql.NullString has potential but I'm not sure how I should be doing it.
Any suggestions will be appreciated. Thanks!
There are a number of ways you can represent NULL when writing to the database. sql.NullString is an option, as is using a pointer (nil = null); the choice really comes down to what you find easier to understand. Russ Cox commented:
There's no effective difference. We thought people might want to use NullString because it is so common and perhaps expresses the intent more clearly than *string. But either will work.
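In other words, either representation ends up as NULL in the database. A minimal sketch (the table t and its columns are hypothetical):
var ns sql.NullString // Valid is false, written as NULL
var p *string         // nil, also written as NULL
_, err := db.Exec(`INSERT INTO t (a, b) VALUES ($1, $2)`, ns, p)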
I suspect that using pointers will be the simplest approach in your situation. For example the following will probably meet your needs:
type Dividend struct {
Symbol string `json:"symbol"`
Dateex string `json:"dateex"`
Datedec *string `json:"datedec"`
Amount string `json:"amount"`
Curr *string `json:"curr"`
Freq string `json:"freq"`
}
func unmarshal(in []byte, div *Dividend) {
err := json.Unmarshal(in, div)
if err != nil {
panic(err)
}
// The below is not necessary unless you want to ensure that blanks
// and missing values are both written to the database as NULL...
if div.Datedec != nil && len(*div.Datedec) == 0 {
div.Datedec = nil
}
if div.Curr != nil && len(*div.Curr) == 0 {
div.Curr = nil
}
}
You can use the Dividend struct in the same way as you are now when writing to the database; the SQL driver will write the nil as a NULL.
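For example, the Exec call from the question works unchanged with the pointer fields (a sketch; error handling elided as in the question):
div := Dividend{Symbol: "XYZ", Dateex: "2020-09-01", Amount: "1.25", Freq: "annual"} // Datedec and Curr stay nil
_, err = tx.Exec(ctx, sql, div.Symbol, div.Dateex, div.Datedec, div.Amount, div.Curr, div.Freq) // nil pointers are written as NULL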
You can also use pgtype and get the SQL driver value from any pgtype using the Value() func:
https://github.com/jackc/pgtype
https://github.com/jackc/pgtype/blob/master/text.go
type Dividend struct {
symbol pgtype.Text `json:"symbol"`
dateex pgtype.Text `json:"dateex"`
datedec pgtype.Text `json:"datedec"`
amount pgtype.Text `json:"amount"`
curr pgtype.Text `json:"curr"`
freq pgtype.Text `json:"freq"`
}
func InsertDividend(d Dividend) error {
// --> get SQL values from d
var err error
symbol, err := d.symbol.Value() // see https://github.com/jackc/pgtype/blob/4db2a33562c6d2d38da9dbe9b8e29f2d4487cc5b/text.go#L174
if err != nil {
return err
}
dateex, err := d.dateex.Value()
if err != nil {
return err
}
// ...
sql := `INSERT INTO dividends
(symbol, dateex, datedec, amount, curr, freq)
VALUES ($1, $2, $3, $4, $5, $6)`
conn, err := pgx.Connect(ctx, "DATABASE_URL")
defer conn.Close(ctx)
tx, err := conn.Begin(ctx)
defer tx.Rollback(ctx)
// --> exec your query using the SQL values you got earlier
_, err = tx.Exec(ctx, sql, symbol, dateex, datedec, amount, curr, freq)
// handle error
err = tx.Commit(ctx)
// handle error
return nil
}
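If you are building the pgtype values yourself from plain JSON strings, an empty string can be mapped to the Null status explicitly (a sketch; textOrNull is a hypothetical helper, not part of pgtype):
func textOrNull(s string) pgtype.Text {
if s == "" {
return pgtype.Text{Status: pgtype.Null} // stored as NULL
}
return pgtype.Text{String: s, Status: pgtype.Present}
}
The resulting values can then go through Value() exactly as in the code above.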

How to remove RETURNING clause in the Create method of gorm package?

I'm a little confused by the default behavior when creating a record with the gorm package.
city := models.City{}
if err := databases.DBGORM.Set("gorm:insert_option", "RETURNING *").Create(&city).Error; err != nil {
fmt.Println(err.Error())
}
In logs I see such SQL query:
INSERT INTO "my_scheme"."city" ("created_at","updated_at","deleted_at","name","country") VALUES ('2020-05-19 23:45:18','2020-05-19 23:45:18',NULL,'New York','USA') RETURNING * RETURNING "my_scheme"."city"."id"
As you can see from the query, there is a double RETURNING clause, which is not correct and raises an error.
Adding the id at the end of the SQL query seems to be the default behavior of the Create method. How can I change this behavior?
models.go:
package models
import (
"my_app/proto"
"time"
)
type City struct {
Id uint64
CreatedAt time.Time
UpdatedAt time.Time
DeletedAt *time.Time
proto.City
}
func (City) TableName() string {
return "my_scheme.city"
}
No, there is no way to change this behavior.
But if you want to get the ID or the timestamps (CreatedAt and UpdatedAt) after calling the Create function, they will be updated automatically in the model you pass by pointer.
If you have another field with a default value, add the default tag to this field in the model, and gorm will automatically update that field too after calling Create.
type City struct {
Id uint64
CreatedAt time.Time
UpdatedAt time.Time
DeletedAt *time.Time
SomeField *string `gorm:"default:test"`
}
// ...
city := models.City{}
if err := databases.DBGORM.Create(&city).Error; err != nil {
fmt.Println(err.Error())
}
fmt.Printf("%+v", city)
[2021-04-13 21:39:44] [1.06ms] INSERT INTO "cities" ("created_at","updated_at","deleted_at") VALUES ('2021-04-13 21:39:44','2021-04-13 21:39:44',NULL) RETURNING "cities"."id"
[1 rows affected or returned ]
[2021-04-13 21:39:44] [0.59ms] SELECT "some_field" FROM "cities" WHERE (id = 26)
[1 rows affected or returned ]
{
"Id": 26,
"CreatedAt": "2021-04-13T21:39:44.809605473+07:00",
"UpdatedAt": "2021-04-13T21:39:44.809605473+07:00",
"DeletedAt": null,
"SomeField": "test"
}
If you don't want to update the model at all, pass it to the Create method by value, not a pointer, and ignore the gorm.ErrUnaddressable error.
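A sketch of that last variant, following the suggestion above (same models and setup as in the question):
city := models.City{}
err := databases.DBGORM.Create(city).Error // passed by value, so gorm cannot write back into it
if err != nil && err != gorm.ErrUnaddressable {
fmt.Println(err.Error()) // a real insert error, not the ignorable one
}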

Why does my code error (mssql: Violation of PRIMARY KEY constraint 'PK_SMSBlast2'. Cannot insert duplicate key in object 'dbo.SMSBlast2')?

I have a problem with my code. I'm using the GORM library to create or insert data through my RESTful API, and it prints an error like this: (mssql: Violation of PRIMARY KEY constraint 'PK_SMSBlast2'. Cannot insert duplicate key in object 'dbo.SMSBlast2'. The duplicate key value is (0).)
package main
import (
"encoding/json"
"fmt"
"github.com/gorilla/mux"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/mssql"
"log"
"net/http"
"time"
)
type SMSBlast struct {
SequenceID int `gorm:"column:SequenceID"`
MobilePhone string `gorm:"column:MobilePhone"`
Output string `gorm:"column:Output"`
WillBeSentDate *time.Time `gorm:"column:WillBeSentDate"`
SentDate *time.Time `gorm:"column:SentDate"`
Status *string `gorm:"column:Status"`
DtmUpd time.Time `gorm:"column:DtmUpd"`
}
func (SMSBlast) TableName() string {
return "SMSBlast2"
}
func insertSMSBlast(w http.ResponseWriter, r *http.Request){
fmt.Println("New Insert Created")
db, err := gorm.Open("mssql", "sqlserver://sa:#localhost:1433?database=CONFINS")
if err != nil{
panic("failed to connect database")
}
defer db.Close()
vars := mux.Vars(r)
sequenceid := vars["sequenceid"]
mobilephone := vars["mobilephone"]
output := vars["output"]
dtmupd := vars["dtmupd"]
sequenceid1, _ := strconv.Atoi(sequenceid)
prindata := db.Create(&SMSBlast{SequenceID: sequenceid1,MobilePhone: mobilephone, Output:output, DtmUpd: time.Now()})
fmt.Println(prindata)
}
func handleRequests(){
myRouter := mux.NewRouter().StrictSlash(true)
myRouter.HandleFunc("/smsblaststest",allSMSBlasts).Methods("POST")
myRouter.HandleFunc("/smsblaststestInsert/{MobilePhone}/{DtmUpd}", insertSMSBlast).Methods("POST")
log.Fatal(http.ListenAndServe(":8080",myRouter))
}
func main(){
fmt.Println("SMSBLASTS ORM")
handleRequests()
}
It appears that for your table, the SequenceID is the primary key.
Your insert statement
db.Create(&SMSBlast{SequenceID: sequenceid1, MobilePhone: mobilephone, Output: output, DtmUpd: time.Now()})
never receives a real sequence number: your route only defines {MobilePhone} and {DtmUpd}, so vars["sequenceid"] is empty, strconv.Atoi fails, and sequenceid1 stays zero. Every insert therefore tries to use the duplicate key 0, which causes your primary key violation. Try making SequenceID an identity column (it will increment automatically), or fix your code to determine the next sequence number and set it in your Create call.
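If an identity column is not an option, a workaround is to look up the next sequence number before inserting (a sketch; not safe under concurrent inserts without a transaction or table lock, and it reuses db and the handler variables from the question):
var maxID int
row := db.Table("SMSBlast2").Select("ISNULL(MAX(SequenceID), 0)").Row()
if err := row.Scan(&maxID); err != nil {
panic(err)
}
prindata := db.Create(&SMSBlast{SequenceID: maxID + 1, MobilePhone: mobilephone, Output: output, DtmUpd: time.Now()})
fmt.Println(prindata)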

Get back newly inserted row in Postgres with sqlx

I use https://github.com/jmoiron/sqlx to make queries to Postgres.
Is it possible to get back the whole row data when inserting a new row?
Here is the query I run:
result, err := Db.Exec("INSERT INTO users (name) VALUES ($1)", user.Name)
Or should I just use my existing user struct as the source of truth about the new entry in the database?
Here is what the sqlx documentation says about this:
The result has two possible pieces of data: LastInsertId() or RowsAffected(), the availability of which is driver dependent. In MySQL, for instance, LastInsertId() will be available on inserts with an auto-increment key, but in PostgreSQL, this information can only be retrieved from a normal row cursor by using the RETURNING clause.
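For a single insert that means swapping Exec for QueryRow and scanning the RETURNING column (a minimal sketch, assuming the users table has a serial id column):
var id int
err := Db.QueryRow(`INSERT INTO users (name) VALUES ($1) RETURNING id`, user.Name).Scan(&id)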
So I made a complete demo of how to execute a transaction using sqlx. The demo creates an address row in the addresses table and then creates a user in the users table, using the new address_id PK as the user_address_id FK of the user.
package transaction
import (
"database/sql"
"log"

"github.com/icrowley/fake"
"github.com/jmoiron/sqlx"
"github.com/pkg/errors"
)
type User struct {
UserID int `db:"user_id"`
UserNme string `db:"user_nme"`
UserEmail string `db:"user_email"`
UserAddressId sql.NullInt64 `db:"user_address_id"`
}
type ITransactionSamples interface {
CreateUserTransaction() (*User, error)
}
type TransactionSamples struct {
Db *sqlx.DB
}
func NewTransactionSamples(Db *sqlx.DB) ITransactionSamples {
return &TransactionSamples{Db}
}
func (ts *TransactionSamples) CreateUserTransaction() (*User, error) {
tx := ts.Db.MustBegin()
var lastInsertId int
err := tx.QueryRowx(`INSERT INTO addresses (address_id, address_city, address_country, address_state) VALUES ($1, $2, $3, $4) RETURNING address_id`, 3, fake.City(), fake.Country(), fake.State()).Scan(&lastInsertId)
if err != nil {
tx.Rollback()
return nil, errors.Wrap(err, "insert address error")
}
log.Println("lastInsertId: ", lastInsertId)
var user User
err = tx.QueryRowx(`INSERT INTO users (user_id, user_nme, user_email, user_address_id) VALUES ($1, $2, $3, $4) RETURNING *;`, 6, fake.UserName(), fake.EmailAddress(), lastInsertId).StructScan(&user)
if err != nil {
tx.Rollback()
return nil, errors.Wrap(err, "insert user error")
}
err = tx.Commit()
if err != nil {
return nil, errors.Wrap(err, "tx.Commit()")
}
return &user, nil
}
Here is test result:
☁ transaction [master] ⚡ go test -v -count 1 ./...
=== RUN TestCreateUserTransaction
2019/06/27 16:38:50 lastInsertId: 3
--- PASS: TestCreateUserTransaction (0.01s)
transaction_test.go:28: &transaction.User{UserID:6, UserNme:"corrupti", UserEmail:"reiciendis_quam@Thoughtstorm.mil", UserAddressId:sql.NullInt64{Int64:3, Valid:true}}
PASS
ok sqlx-samples/transaction 3.254s
This is sample code that works with named queries and strongly typed structures for the inserted data and the returned ID.
The query and struct are included to show the syntax used.
const insertCheck = `INSERT INTO checks (
start, status) VALUES (
:start, :status)
returning id;`
type Row struct {
Status string `db:"status"`
Start time.Time `db:"start"`
}
func InsertCheck(ctx context.Context, row Row, tx *sqlx.Tx) (int64, error) {
return insert(ctx, row, insertCheck, "checks", tx)
}
// insert inserts row into table using query SQL command
// table is used only for logging; the actual table name is defined in query
// should not be used from services directly - implement strong type wrappers
// function expects query with named parameters
func insert(ctx context.Context, row interface{}, query string, table string, tx *sqlx.Tx) (int64, error) {
// convert named query to native parameters format
query, args, err := tx.BindNamed(query, row)
if err != nil {
return 0, fmt.Errorf("cannot bind parameters for insert into %q: %w", table, err)
}
var id struct {
Val int64 `db:"id"`
}
err = sqlx.GetContext(ctx, tx, &id, query, args...)
if err != nil {
return 0, fmt.Errorf("cannot insert into %q: %w", table, err)
}
return id.Val, nil
}
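Called inside a transaction it might look like this (a sketch; db is assumed to be a *sqlx.DB):
tx, err := db.Beginx()
if err != nil {
return err
}
defer tx.Rollback()
id, err := InsertCheck(ctx, Row{Status: "ok", Start: time.Now()}, tx)
if err != nil {
return err
}
log.Println("inserted check", id)
return tx.Commit()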
PostgreSQL supports RETURNING syntax for INSERT statements.
Example:
INSERT INTO users(...) VALUES(...) RETURNING id, name, foo, bar
Documentaion: https://www.postgresql.org/docs/9.6/static/sql-insert.html
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted (or updated, if an ON CONFLICT DO UPDATE clause was used). This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT. Only rows that were successfully inserted or updated will be returned.

Gorm Golang orm associations

I'm using Go with the GORM ORM.
I have the following structs. The relation is simple. One Town has multiple Places and one Place belongs to one Town.
type Place struct {
ID int
Name string
Town Town
}
type Town struct {
ID int
Name string
}
Now I want to query all places and get, along with all their fields, the info of the corresponding town.
This is my code:
db, _ := gorm.Open("sqlite3", "./data.db")
defer db.Close()
places := []Place{}
db.Find(&places)
fmt.Println(places)
My sample database has this data:
/* places table */
id name town_id
1 Place1 1
2 Place2 1
/* towns Table */
id name
1 Town1
2 Town2
I'm receiving this:
[{1 Place1 {0 }} {2 Mares Place2 {0 }}]
But I'm expecting to receive something like this (both places belong to the same town):
[{1 Place1 {1 Town1}} {2 Mares Place2 {1 Town1}}]
How can I do such a query? I tried using Preload and Related without success (probably the wrong way). I can't get the expected result.
TownID must be specified as the foreign key. The Place struct then looks like this:
type Place struct {
ID int
Name string
Description string
TownID int
Town Town
}
Now there are different approach to handle this. For example:
places := []Place{}
db.Find(&places)
for i, _ := range places {
db.Model(places[i]).Related(&places[i].Town)
}
This will certainly produce the expected result, but notice the log output and the queries triggered.
[4.76ms] SELECT * FROM "places"
[1.00ms] SELECT * FROM "towns" WHERE ("id" = '1')
[0.73ms] SELECT * FROM "towns" WHERE ("id" = '1')
[{1 Place1 {1 Town1} 1} {2 Place2 {1 Town1} 1}]
The output is as expected, but this approach has a fundamental flaw: for every place we need to do another db query, which produces an n+1 queries problem. It solves the problem, but it will quickly get out of control as the number of places grows.
It turns out that the right approach is fairly simple using preloads.
db.Preload("Town").Find(&places)
That's it, the query log produced is:
[22.24ms] SELECT * FROM "places"
[0.92ms] SELECT * FROM "towns" WHERE ("id" in ('1'))
[{1 Place1 {1 Town1} 1} {2 Place2 {1 Town1} 1}]
This approach will only trigger two queries: one for all places, and one for all towns that have places. This approach scales well regardless of the number of places and towns (only two queries in all cases).
You do not specify the foreign key of towns in your Place struct. Simply add TownId to your Place struct and it should work.
package main
import (
"fmt"
"github.com/jinzhu/gorm"
_ "github.com/mattn/go-sqlite3"
)
type Place struct {
Id int
Name string
Town Town
TownId int // Foreign key
}
type Town struct {
Id int
Name string
}
func main() {
db, _ := gorm.Open("sqlite3", "./data.db")
defer db.Close()
db.CreateTable(&Place{})
db.CreateTable(&Town{})
t := Town{
Name: "TestTown",
}
p1 := Place{
Name: "Test",
TownId: 1,
}
p2 := Place{
Name: "Test2",
TownId: 1,
}
err := db.Save(&t).Error
err = db.Save(&p1).Error
err = db.Save(&p2).Error
if err != nil {
panic(err)
}
places := []Place{}
err = db.Find(&places).Error
for i, _ := range places {
db.Model(places[i]).Related(&places[i].Town)
}
if err != nil {
panic(err)
} else {
fmt.Println(places)
}
}
To optimize the query, I use an "in" condition in the same situation:
places := []Place{}
DB.Find(&places)
keys := []uint{}
for _, value := range places {
keys = append(keys, value.TownID)
}
rows := []Town{}
DB.Where(keys).Find(&rows)
related := map[uint]Town{}
for _, value := range rows {
related[value.ID] = value
}
for key, value := range places {
if town, ok := related[value.TownID]; ok {
places[key].Town = town
}
}
First change your model:
type Place struct {
ID int
Name string
Description string
TownID int
Town Town
}
And second, use preloading:
https://gorm.io/docs/preload.html
Summary: preloading a one-to-one relation (has one, belongs to):
eager preload:
db.Preload("Orders").Preload("Profile").Find(&users)
join preload using inner join:
db.Joins("Orders").Joins("Profile").Find(&users)
preload all associations:
db.Preload(clause.Associations).Find(&users)
No need to loop over the places for the ids, just pluck them:
townIDs := []uint{}
DB.Model(&Place{}).Pluck("town_id", &townIDs)
towns := []Town{}
DB.Where(townIDs).Find(&towns)