skip updating created_at column while using gorm Update with associations - go-gorm

Background
We are using gorm association as we have nested hierarchy. We are using Create() for creation of records across multiple tables and Updates opeartion for updating the records.
While doing the update, we first fetch complete hierarchy, update it and then invoke update().
Issue
During the update we do not want to change the created_at timestamp.
For top level struct/ record, this is working fine and created_at column is not updated. However, for nested records created_at timestamp is changed always on performing update operation. On update, we just want to update the updated_at column and keep created_at intact.
Structs:
struct A {
ID string `gorm:"column:id;type:char(36); not null;primary_key"`
B []*B `gorm:"foreignKey:AID;references:ID"`
CreatedAt time.Time `gorm:"column:created_at;type:timestamp; not null"`
UpdatedAt time.Time `gorm:"column:updated_at;type:timestamp; not null"`
}
struct B {
ID string `gorm:"column:id;type:char(36); not null;primary_key"`
AID string `gorm:"column:aid;type:char(36); not null"`
C []*C `gorm:"foreignKey:BID;references:ID"`
CreatedAt time.Time `gorm:"column:created_at;type:timestamp; not null"`
UpdatedAt time.Time `gorm:"column:updated_at;type:timestamp; not null"`
}
struct C {
ID string `gorm:"column:id;type:char(36); not null;primary_key"`
BID string `gorm:"column:bid;type:char(36); not null"`
CreatedAt time.Time `gorm:"column:created_at;type:timestamp; not null"`
UpdatedAt time.Time `gorm:"column:updated_at;type:timestamp; not null"`
}
Below is the snippet of code how we are updating the records:
txn.Session(&gorm.Session{Context: ctx, FullSaveAssociations: true}).
Updates(record)
Query Executed:
UPDATE A SET updated_at=? WHERE id = ?
INSERT INTO B (id, aid, created_at, updated_at) ON DUPLICATE KEY UPDATE id=VALUES(id), aid=(aid),created_at=VALUES(),updated_at=VALUES(updated_at)
INSERT INTO C (id, bid, created_at, updated_at) ON DUPLICATE KEY UPDATE id=VALUES(id), bid=(aid),created_at=VALUES(),updated_at=VALUES(updated_at)
Things tried:
Omit: we have tried using db.omit("B.CreatedAt, B.C.CreatedAt"). However, this is not working.
<-:create: This leads to failure in Unit tests with reason: there is a remaining expectation which was not matched. Essentially, nested records are not persisted.

Related

In GoLang how do you scan in a sql result where some records may have a null join?

I'm not entirely sure the best way to phrase this problem, but hopefully my description and the code will show what I mean.
I'm building an API that uses a SQL db. One of the record types, Order can contain a Key, but the key ID is null until the Order is finalized. I have a function that queries the DB for all orders, joined with the Key DB so that the key details are populated if the ID is not null.
How do I scan in Orders where some have keyId null and some do not?
Order struct:
type orderDb struct {
id int
certificate certificateDb
location string
status string
knownRevoked bool
err sql.NullString // stored as json object
expires sql.NullInt32
dnsIdentifiers commaJoinedStrings // will be a comma separated list from storage
authorizations commaJoinedStrings // will be a comma separated list from storage
finalize string
finalizedKey *keyDb
certificateUrl sql.NullString
pem sql.NullString
validFrom sql.NullInt32
validTo sql.NullInt32
createdAt int
updatedAt int
}
keyDb struct
type keyDb struct {
id int
name string
description string
algorithmValue string
pem string
apiKey string
apiKeyViaUrl bool
createdAt int
updatedAt int
}
Partial SQL query & scan
query := `
SELECT
ao.id, ao.status, ao.known_revoked, ao.error, ao.dns_identifiers, ao.valid_from,
ao.valid_to, ao.created_at, ao.updated_at,
pk.id, pk.name,
c.id, c.name, c.subject,
aa.id, aa.name, aa.is_staging
FROM
acme_orders ao
LEFT JOIN private_keys pk on (ao.finalized_key_id = pk.id)
LEFT JOIN certificates c on (ao.certificate_id = c.id)
LEFT JOIN acme_accounts aa on (c.acme_account_id = aa.id)
err = rows.Scan(
&oneOrder.id,
&oneOrder.status,
&oneOrder.knownRevoked,
&oneOrder.err,
&oneOrder.dnsIdentifiers,
&oneOrder.validFrom,
&oneOrder.validTo,
&oneOrder.createdAt,
&oneOrder.updatedAt,
&oneOrder.finalizedKey.id,
&oneOrder.finalizedKey.name,
Essentially sometimes key id and name are null because the key is null. How do I set finalizedKey to null when this is the case, but scan in the values when the key isn't null?
I don't really want to do a separate query for keys because the slice of Orders could have 20, 50, or 100+ records and I don't want to do 101 queries to return a slice of 100 Orders.

Sqlx join table with same fields

I'm using Go 1.17 with Sqlx but I don't understand how I can join my table correctly.
Here is my structs (my join isn't logic I'm just testing jointure with sqlx)
Table album:
package album
import ".../api/v1/movie"
type Album struct {
ID string `json:"id"`
Title string `json:"title"`
Artist string `json:"artist"`
Price float64 `json:"price"`
MovieId int `json:"movie_id" db:"movie_id"`
movie.Movie
}
Table movie:
package movie
type Movie struct {
ID string `json:"id"`
Year uint16 `json:"year"`
RentNumber uint32 `json:"rent_number" db:"rent_number"`
Title string `json:"title"`
Author string `json:"author"`
Editor string `json:"editor"`
Index string `json:"index"`
Bib string `json:"bib"`
Ref string `json:"ref"`
Cat1 string `json:"cat_1" db:"cat_1"`
Cat2 string `json:"cat_2" db:"cat_2"`
}
And this is how I do my join:
albums := []Album{}
r.db.Select(&albums, "SELECT * FROM album a INNER JOIN movie m ON (m.id=a.movie_id)")
The problem is that these 2 tables have the same id field so the album id is overridden by the movie id and I lost it.
How can I do to ignore the movie id field (because I got it in the movie_id field and keep the field id for the album id ?
You can give one of your id fields an id tag like:
type Album struct {
ID string `json:"id" id:"album_id"`
Title string `json:"title"`
Artist string `json:"artist"`
Price float64 `json:"price"`
movie.Movie
}
and then make a query which aliases the id field to album_id like:
SELECT movie.id as id, album.id as album_id, ... FROM album ...
Just keep in mind that you now need to use this column name in your named queries as well.

GORM preload: How to use a custom table name

I have a GORM query with a preload that works just fine because I'm binding it to a struct called "companies" which is also the name of the corresponding database table:
var companies []Company
db.Preload("Subsidiaries").Joins("LEFT JOIN company_prod ON company_products.company_id = companies.id").Where("company_products.product_id = ?", ID).Find(&companies)
Now I want to do something similar, but bind the result to a struct that does not have a name that refers to the "companies" table:
var companiesFull []CompanyFull
db.Preload("Subsidiaries").Joins("LEFT JOIN company_prod ON company_products.company_id = companies.id").Where("company_products.product_id = ?", ID).Find(&companies)
I've simplified the second call for better understanding, the real call has more JOINs and returns more data, so it can't be bound to the "companies" struct.
I'm getting an error though:
column company_subsidiaries.company_full_id does not exist
The corresponding SQL query:
SELECT * FROM "company_subsidiaries" WHERE "company_subsidiaries"."company_full_id" IN (2,1)
There is no "company_subsidiaries.company_full_id", the correct query should be:
SELECT * FROM "company_subsidiaries" WHERE "company_subsidiaries"."company_id" IN (2,1)
The condition obviously gets generated from the name of the struct the result is being bound to. Is there any way to specify a custom name for this case?
I'm aware of the Tabler interface technique, however it doesn't work for Preload I believe (tried it, it changes the table name of the main query, but not the preload).
Updated: More info about the DB schema and structs
DB schema
TABLE companies
ID Primary key
OTHER FIELDS
TABLE products
ID Primary key
OTHER FIELDS
TABLE subsidiaries
ID Primary key
OTHER FIELDS
TABLE company_products
ID Primary key
Company_id Foreign key (companies.id)
Product_id Foreign key (products.id)
TABLE company_subsidiaries
ID Primary key
Company_id Foreign key (companies.id)
Subsidiary_id Foreign key (subsidiaries.id)
Structs
type Company struct {
Products []*Product `json:"products" gorm:"many2many:company_products;"`
ID int `json:"ID,omitempty"`
}
type CompanyFull struct {
Products []*Product `json:"products" gorm:"many2many:company_products;"`
Subsidiaries []*Subsidiary `json:"subsidiaries" gorm:"many2many:company_products;"`
ID int `json:"ID,omitempty"`
}
type Product struct {
Name string `json:"name"`
ID int `json:"ID,omitempty"`
}
type Subsidiary struct {
Name string `json:"name"`
ID int `json:"ID,omitempty"`
}
Generated SQL (by GORM)
SELECT * FROM "company_subsidiaries" WHERE "company_subsidiaries"."company_full_id" IN (2,1)
SELECT * FROM "subsidiaries" WHERE "subsidiaries"."id" IN (NULL)
SELECT companies.*, company_products.*, FROM "companies" LEFT JOIN company_products ON company_products.company_id = companies.id WHERE company_products.product_id = 1
Seems like the way to go in this case may be to customize the relationship in your CompanyFull model. Using joinForeignKey the following code works.
type CompanyFull struct {
Products []*Product `json:"products" gorm:"many2many:company_products;joinForeignKey:ID"`
Subsidiaries []*Subsidiary `json:"subsidiaries" gorm:"many2many:company_subsidiaries;joinForeignKey:ID"`
ID int `json:"ID,omitempty"`
}
func (CompanyFull) TableName() string {
return "companies"
}
func main(){
...
result := db.Preload("Subsidiaries").Joins("LEFT JOIN company_products ON company_products.company_id = companies.id").Where("company_products.product_id = ?", ID).Find(&companies)
if result.Error != nil {
log.Println(result.Error)
} else {
log.Printf("%#v", companies)
}
For more info regarding customizing the foreign keys used in relationships, take a look at the docs https://gorm.io/docs/many_to_many.html#Override-Foreign-Key

Efficiently mapping one-to-many many-to-many database to struct in Golang

Question
When dealing with a one-to-many or many-to-many SQL relationship in Golang, what is the best (efficient, recommended, "Go-like") way of mapping the rows to a struct?
Taking the example setup below I have tried to detail some approaches with Pros and Cons of each but was wondering what the community recommends.
Requirements
Works with PostgreSQL (can be generic but not include MySQL/Oracle specific features)
Efficiency - No brute forcing every combination
No ORM - Ideally using only database/sql and jmoiron/sqlx
Example
For sake of clarity I have removed error handling
Models
type Tag struct {
ID int
Name string
}
type Item struct {
ID int
Tags []Tag
}
Database
CREATE TABLE item (
id INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
CREATE TABLE tag (
id INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
name VARCHAR(160),
item_id INT REFERENCES item(id)
);
Approach 1 - Select all Items, then select tags per item
var items []Item
sqlxdb.Select(&items, "SELECT * FROM item")
for i, item := range items {
var tags []Tag
sqlxdb.Select(&tags, "SELECT * FROM tag WHERE item_id = $1", item.ID)
items[i].Tags = tags
}
Pros
Simple
Easy to understand
Cons
Inefficient with the number of database queries increasing proportional with number of items
Approach 2 - Construct SQL join and loop through rows manually
var itemTags = make(map[int][]Tag)
var items = []Item{}
rows, _ := sqlxdb.Queryx("SELECT i.id, t.id, t.name FROM item AS i JOIN tag AS t ON t.item_id = i.id")
for rows.Next() {
var (
itemID int
tagID int
tagName string
)
rows.Scan(&itemID, &tagID, &tagName)
if tags, ok := itemTags[itemID]; ok {
itemTags[itemID] = append(tags, Tag{ID: tagID, Name: tagName,})
} else {
itemTags[itemID] = []Tag{Tag{ID: tagID, Name: tagName,}}
}
}
for itemID, tags := range itemTags {
items = append(Item{
ID: itemID,
Tags: tags,
})
}
Pros
A single database call and cursor that can be looped through without eating too much memory
Cons
Complicated and harder to develop with multiple joins and many attributes on the struct
Not too performant; more memory usage and processing time vs. more network calls
Failed approach 3 - sqlx struct scanning
Despite failing I want to include this approach as I find it to be my current aim of efficiency paired with development simplicity. My hope was by explicitly setting the db tag on each struct field sqlx could do some advanced struct scanning
var items []Item
sqlxdb.Select(&items, "SELECT i.id AS item_id, t.id AS tag_id, t.name AS tag_name FROM item AS i JOIN tag AS t ON t.item_id = i.id")
Unfortunately this errors out as missing destination name tag_id in *[]Item leading me to believe the StructScan is not advanced enough to recursively loop through rows (no criticism - it is a complicated scenario)
Possible approach 4 - PostgreSQL array aggregators and GROUP BY
While I am sure this will not work I have included this untested option to see if it could be improved upon so it may work.
var items = []Item{}
sqlxdb.Select(&items, "SELECT i.id as item_id, array_agg(t.*) as tags FROM item AS i JOIN tag AS t ON t.item_id = i.id GROUP BY i.id")
When I have some time I will try and run some experiments here.
the sql in postgres :
create schema temp;
set search_path = temp;
create table item
(
id INT generated by default as identity primary key
);
create table tag
(
id INT generated by default as identity primary key,
name VARCHAR(160),
item_id INT references item (id)
);
create view item_tags as
select id,
(
select
array_to_json(array_agg(row_to_json(taglist.*))) as array_to_json
from (
select tag.name, tag.id
from tag
where item_id = item.id
) taglist ) as tags
from item ;
-- golang query this maybe
select row_to_json(row)
from (
select * from item_tags
) row;
then golang query this sql:
select row_to_json(row)
from (
select * from item_tags
) row;
and unmarshall to go struct:
pro:
postgres manage the relation of data. add / update data with sql functions.
golang manage business model and logic.
it's easy way.
.
I can suggest another approach which I have used before.
You make a json of the tags in this case in the query and return it.
Pros: You have 1 call to the db, which aggregates the data, and all you have to do is parse the json into an array.
Cons: It's a bit ugly. Feel free to bash me for it.
type jointItem struct {
Item
ParsedTags string
Tags []Tag `gorm:"-"`
}
var jointItems []*jointItem
db.Raw(`SELECT
items.*,
(SELECT CONCAT(
'[',
GROUP_CONCAT(
JSON_OBJECT('id', id,
'name', name
)
),
']'
)) as parsed_tags
FROM items`).Scan(&jointItems)
for _, o := range jointItems {
var tempTags []Tag
if err := json.Unmarshall(o.ParsedTags, &tempTags) ; err != nil {
// do something
}
o.Tags = tempTags
}
Edit: code might behave weirdly so I find it better to use a temporary tags array when moving instead of using the same struct.
You can use carta.Map() from https://github.com/jackskj/carta
It tracks has-many relationships automatically.

Offset queries for non-unique columns

Let's say I have a table like this:
CREATE TABLE book (
id INTEGER PRIMARY KEY,
title VARCHAR(32) NOT NULL
);
and I want to support query with offset in order to support API that would return a list of books, ordered by non-unique title field with a given offset and limit.
The question here is what is the most efficient way [1] to define a unique index (or helper column or anything like that) for non-unique title column that could be used as an opaque offset token in queries where I'm using ORDER BY title. I thought about an index created on function that would return a unique numeric position of a row but I'm afraid that this would severely affect timings for INSERTs and UPDATEs for big tables and I think there is an optimal solution for that.
While this is straightforward for ORDER BY {unique_field} queries [2] I don't see an easy way to achieve the same for non-unique fields.
Also let's assume that solution should work in postgresql and mysql.
Notes:
[1] Since straightforward solutions like SELECT id, title FROM book ORDER BY title OFFSET [number] LIMIT [number] work extremely bad for big numeric offset values, I would introduce some sort of opaque token that would represent an offset in a given set in my API for getting book chunks.
So API method that would return a list of books ordered by title with a given offset would look like this (pseudocode):
BookPage getBooks(optional string offsetToken, int limit)
where BookPage is defined as follows:
class BookPage {
nonnull List<Book> books;
nonnull string offsetToken; // expected to be used to return a next page
}
Example use, book table contains 2*N books:
// 1st call
BookPage page1 = getBooks(null, 2); // get first 2 books
BookPage page2 = getBooks(page1.offsetToken, 2); // get next 2 books
BookPage page3 = getBooks(page2.offsetToken, 2); // get next 2 books
//...
BookPage pageN = getBooks(pageN-1.offsetToken, 2); // get last 2 books
and a concatenation of lists page1.books, page2.books, ... pageN.books would produce a list of books ordered by title in ascending order.
[2] For example: If getBooks API would use offset queries where books ordered by id (which is a primary key) offsetToken would be an id of the last book and implementation of getBooks API would look as follows (pseudocode):
BookPage getBook(optional string offsetToken, int limit) {
Long startId = (offsetToken != null ? toLong(offsetToken) : null);
page.books = (SELECT id, title FROM books
WHERE :startId IS null OR id>:startId
ORDER BY id
LIMIT :limit);
page.offsetToken = toString(lastElementOf(page.books).id)
return page;
}
The easiest and simplest solution that I found so far is to use a non-unique column in conjunction with the primary key. It slightly complicates select queries - e.g. for original problem you need to write something like (title = :title AND id > :id) OR (title > :title), where :title and :id constitute offsetToken (last item's title and id).