Submitting an SQL query with a slice parameter - sql

I have a Snowflake query where I'm trying to update a field on all items where another field is in a list which is submitted to the query as a variable:
UPDATE my_table SET download_enabled = ? WHERE provider_id = ? AND symbol IN (?)
I've tried doing this query using the gosnowflake.Array function like this:
enable := true
provider := 1
query := "UPDATE my_table SET download_enabled = ? WHERE provider_id = ? AND symbol IN (?)"
if _, err := client.db.ExecContext(ctx, query, enable, provider, gosnowflake.Array(assets)); err != nil {
    fmt.Printf("Error: %v", err)
}
However, this code fails with the following error:
002099 (42601): SQL compilation error: Batch size of 1 for bind variable 1 not the same as previous size of 2.
So then, how can I submit a variable representing a list of values to an SQL query?

I found a potential workaround, which is to submit each item in the list as a separate parameter explicitly:
func Delimit(s string, sep string, count uint) string {
    return strings.Repeat(s+sep, int(count)-1) + s
}

func doQuery(enable bool, provider int, assets ...string) error {
    query := fmt.Sprintf("UPDATE my_table SET download_enabled = ? "+
        "WHERE provider_id = ? AND symbol IN (%s)", Delimit("?", ", ", uint(len(assets))))
    params := []interface{}{enable, provider}
    for _, asset := range assets {
        params = append(params, asset)
    }
    if _, err := client.db.ExecContext(ctx, query, params...); err != nil {
        return err
    }
    return nil
}
Needless to say, this is a less elegant solution than what I wanted, but it does work.

Related

How to Unit Test Gorm Golang Preload Query

Recently, I was trying to query data with preload function that gorm provided.
Here's what the actual implementation looks like:
func (repo *SectorRepository) GetSectorsByStockCode(stockCode string) *model.Sector {
    var foundStocks []*model.Stock
    repo.DB.Where("code = ?", stockCode).Preload("Sectors").Find(&foundStocks)
    if foundStocks == nil {
        return nil
    } else if len(foundStocks) == 0 {
        return nil
    } else if foundStocks[0].Sectors == nil {
        return nil
    } else if len(foundStocks[0].Sectors) == 0 {
        return nil
    } else {
        return foundStocks[0].Sectors[0]
    }
}
Then this is what I've done on my unit test:
func Test_SectorRepo_GetSectorsByStockCode(t *testing.T) {
    db, gdb, sqlMock, err := InitTestSectorRepo()
    require.NoError(t, err)
    defer db.Close()
    repository := repository.SectorRepository{DB: gdb}
    var stockCode = "AAPL"
    var expectedCode = "code1"
    var expectedName = "name1"
    var exchangeCode = "exchange_code1"
    var sectorCode = "sector_code1"
    var now = time.Now()
    sqlMock.ExpectQuery(regexp.QuoteMeta(`SELECT * FROM "stocks" WHERE code = $1`)).WithArgs(stockCode).WillReturnRows(
        sqlMock.NewRows([]string{"code", "name", "exchange_code", "created_at", "updated_at"}).
            AddRow(expectedCode, expectedName, exchangeCode, now, now),
    )
    sqlMock.ExpectQuery(regexp.QuoteMeta(`SELECT * FROM "stock_sectors" WHERE ("stock_sectors"."stock_code","stock_sectors"."stock_exchange_code") IN (($1,$2))`)).
        WithArgs(expectedCode, exchangeCode).
        WillReturnRows(sqlMock.NewRows([]string{"stock_code", "stock_exchange_code", "sector_code"}).
            AddRow(expectedCode, exchangeCode, sectorCode))
    sqlMock.ExpectQuery(regexp.QuoteMeta(`SELECT * FROM "sectors" WHERE "sectors"."code" = $1'`)).WithArgs(stockCode).WillReturnRows(
        sqlMock.NewRows([]string{"code", "name", "created_at", "updated_at"}).AddRow(sectorCode, "sector_name1", now, now),
    )
    res := repository.GetSectorsByStockCode(stockCode)
    require.Equal(t, res.Name, expectedName)
}
But I got a failing message as provided below:
Running tool: /usr/local/go/bin/go test -timeout 30s -run ^Test_SectorRepo_GetSectorsByStockCode$ gitlab.com/xxx/yyy-service/test/repository
2022/07/02 20:38:10 /Users/aaa/yyy-service/api/repository/sector_repository.go:38
[0.045ms] [rows:1] SELECT * FROM "stock_sectors" WHERE ("stock_sectors"."stock_code","stock_sectors"."stock_exchange_code") IN (('code1','exchange_code1'))
2022/07/02 20:38:10 /Users/aaa/yyy-service/api/repository/sector_repository.go:38 Query: could not match actual sql: "SELECT * FROM "sectors" WHERE "sectors"."code" = $1" with expected regexp "SELECT \* FROM "sectors" WHERE "sectors"\."code" = \$1'"
[0.025ms] [rows:0] SELECT * FROM "sectors" WHERE "sectors"."code" = 'sector_code1'
2022/07/02 20:38:10 /Users/aaa/yyy-service/api/repository/sector_repository.go:38 Query: could not match actual sql: "SELECT * FROM "sectors" WHERE "sectors"."code" = $1" with expected regexp "SELECT \* FROM "sectors" WHERE "sectors"\."code" = \$1'"
[1.300ms] [rows:1] SELECT * FROM "stocks" WHERE code = 'AAPL'
--- FAIL: Test_SectorRepo_GetSectorsByStockCode (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x14499f6]
goroutine 4 [running]:
testing.tRunner.func1.2({0x1497620, 0x1896c40})
/usr/local/go/src/testing/testing.go:1209 +0x24e
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1212 +0x218
panic({0x1497620, 0x1896c40})
/usr/local/go/src/runtime/panic.go:1038 +0x215
gitlab.com/xxx/yyy-service/test/repository_test.Test_SectorRepo_GetSectorsByStockCode(0x0)
/Users/aaa/yyy-service/test/repository/sector_repository_test.go:131 +0x8b6
testing.tRunner(0xc00011b380, 0x1526fd8)
/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1306 +0x35a
FAIL gitlab.com/xxx/yyy-service/test/repository 0.533s
FAIL
I assumed from the start that gorm's Preload function would issue subsequent queries, so in the code above I tried to set an expectation for each of those queries.
I'm still trying to figure out whether it's possible to test a gorm Preload query using sqlmock.
If anyone could help answer this, it would be much appreciated :)

How to make query parameters optional

In Postgres in Go, how can I make query parameters optional?
In this example, status is an optional condition. If no status is passed, all rows from the records table should be fetched.
How can I make the query parameter &d.Status optional?
type QueryParams struct {
    Status string `json:"status"`
}

func (r repo) GetRecords(d *QueryParams) ([]*Records, error) {
    statusQuery := ""
    if d.Status != "" {
        statusQuery = " where status = $1 "
    }
    query := "select id, title, status from records " + statusQuery
    rows, err := r.db.Query(query, &d.Status)
}
Query is variadic so you could build an []interface{} to hold the arguments:
args := []interface{}{}
and then to conditionally build the argument list:
if d.Status != "" {
    statusQuery = " where status = $1 "
    args = append(args, &d.Status)
}
When you run the query, expand the arguments using ...:
rows, err := r.db.Query(query, args...)
You may use a flexible WHERE clause, e.g.
SELECT id, title, status
FROM records
WHERE status = $1 OR $1 IS NULL;
The logic here is that if you provide a value for $1, it must match the status in order for a record to be returned. Otherwise, if $1 is left out (i.e. is NULL), then all records are returned.
Note that to make this work from Go with the Postgres driver, you may need to do some extra massaging. As a first attempt, I would try this:
statusQuery = "where status = $1 or $1::text is null"
query := "select id, title, status from records " + statusQuery
rows, err := r.db.Query(query, &d.Status)
Sometimes the driver can't figure out the type of the bound parameter $1. By explicitly casting it to text, the statement can be made to work.

Append text to column using go with pq driver

I am trying to append text to a database column using golang and the pq driver.
The error I am getting is panic: pq: could not determine data type of parameter $2
sqlStatement := `
    UPDATE sales
    SET description = concat(description, $2)
    WHERE id = $1;`
id := 1
desc := "appendthis"
res, err := db.Exec(sqlStatement, id, desc)
if err != nil {
    panic(err)
}
I also tried SET description = description || $2, which didn't panic, though the field did not get updated.
Any ideas what I am doing wrong?

BigQuery Schema Update while copying data from other tables

I have table1, which has lots of nested columns, and table2, which has some additional columns that may themselves be nested. I'm using the golang client library.
Is there any way to update the schema while copying from one table to another?
Sample Code :
dataset := client.Dataset("test")
copier := dataset.Table(table1).CopierFrom(dataset.Table(table2))
copier.WriteDisposition = bigquery.WriteAppend
copier.CreateDisposition = bigquery.CreateIfNeeded
job, err := copier.Run(ctx)
if err != nil {
    fmt.Println("error while run :", err)
}
status, err := job.Wait(ctx)
if err != nil {
    fmt.Println("error in wait :", err)
}
if err := status.Err(); err != nil {
    fmt.Println("error in status :", err)
}
Some background first:
I created two tables under the dataset test, as follows:
Table 1 schema: name (String), age (Integer), with rows:
"Varun", 19
"Raja", 27
Table 2 schema: pet_name (String), type (String), with rows:
"jimmy", "dog"
"ramesh", "cat"
Note that the two relations have different schemas.
Here I am copying the contents of table 2 into table 1. The bigquery.WriteAppend tells the query engine to append the results of table 2 to table 1.
test := client.Dataset("test")
copier := test.Table("1").CopierFrom(test.Table("2"))
copier.WriteDisposition = bigquery.WriteAppend
if _, err := copier.Run(ctx); err != nil {
    log.Fatalln(err)
}

query := client.Query("SELECT * FROM `test.1`;")
results, err := query.Read(ctx)
if err != nil {
    log.Fatalln(err)
}
for {
    row := make(map[string]bigquery.Value)
    err := results.Next(&row)
    if err == iterator.Done {
        return
    }
    if err != nil {
        log.Fatalln(err)
    }
    fmt.Println(row)
}
Nothing happens and the result is:
map[age:19 name:Varun]
map[name:Raja age:27]
Table 1, the destination is unchanged.
What if source and destination had the same schemas in the copy?
For example:
copier := test.Table("1").CopierFrom(test.Table("1"))
Then the copy succeeds! And table 1 has twice the rows it initially had.
map[name:Varun age:19]
map[age:27 name:Raja]
map[name:Varun age:19]
map[name:Raja age:27]
But what if we somehow wanted to combine tables even with different schemas?
Well, first you need a GCP billing account, as you are technically doing Data Manipulation (DML). You can get $300 of free credit.
Then the following will work:
query := client.Query("SELECT * FROM `test.2`;")
query.SchemaUpdateOptions = []string{"ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"}
query.CreateDisposition = bigquery.CreateIfNeeded
query.WriteDisposition = bigquery.WriteAppend
query.QueryConfig.Dst = client.Dataset("test").Table("1")
results, err := query.Read(ctx)
And the result is
map[pet_name:<nil> type:<nil> name:Varun age:19]
map[name:Raja age:27 pet_name:<nil> type:<nil>]
map[pet_name:ramesh type:cat name:<nil> age:<nil>]
map[pet_name:jimmy type:dog name:<nil> age:<nil>]
EDIT
Instead of query.Read() you can use query.Run() if you just want to run the query and not fetch results back, as show below:
if _, err := query.Run(ctx); err != nil {
log.Fatalln(err)
}
Important things to note:
We have set query.SchemaUpdateOptions to include ALLOW_FIELD_ADDITION which will allow for the resulting table to have columns not originally present.
We have set query.WriteDisposition to bigquery.WriteAppend for data to be appended.
We have set query.QueryConfig.Dst to client.Dataset("test").Table("1") which means the result of the query will be uploaded to 1.
Values that exist in only one of the tables are nullified (set to nil, in the Go sense) in the other.
This hack will give you the same results as combining two tables.
Hope this helps.

Golang SQL query variable substitution

I have an SQL query that needs variable substitution for better consumption by my go-kit service.
I have dep & org as user inputs, which are part of my REST service; for instance, dep = 'abc' and org = 'def'.
I've tried a few things, like:
rows, err := db.Query(
    "select name from table where department='&dep' and organisation='&org'",
)
And:
rows, err := db.Query(
    "select name from table where department=? and organisation=?", dep, org,
)
That led to the error: sql: statement expects 0 inputs; got 2
Only hard-coded values work; substitution fails.
I haven't found much help in the Oracle blogs regarding this, and I'm wondering if there is any way to approach it.
Parameter Placeholder Syntax (reference: http://go-database-sql.org/prepared.html)
The syntax for placeholder parameters in prepared statements is database-specific. For example, comparing MySQL, PostgreSQL, and Oracle:
MySQL              PostgreSQL            Oracle
=====              ==========            ======
WHERE col = ?      WHERE col = $1        WHERE col = :col
VALUES(?, ?, ?)    VALUES($1, $2, $3)    VALUES(:val1, :val2, :val3)
For Oracle you need to use :dep, :org as placeholders.
As @dakait stated, in your prepared statement you should use : placeholders.
So, for completeness, you would get it working with something like:
package main

import (
    "database/sql"
    "fmt"
    "log"
)

// Output is an example struct
type Output struct {
    Name string
}

const (
    dep = "abc"
    org = "def"
)

var db *sql.DB // assumed to be opened with an Oracle driver elsewhere

func main() {
    query := "SELECT name FROM table WHERE department = :1 AND organisation = :2"
    q, err := db.Prepare(query)
    if err != nil {
        log.Fatal(err)
    }
    defer q.Close()
    var out Output
    if err := q.QueryRow(dep, org).Scan(&out.Name); err != nil {
        log.Fatal(err)
    }
    fmt.Println(out.Name)
}