GORM raw SQL query execution

I'm running a query to check whether a table exists using the GORM ORM for Go. Below is my code.
package main

import (
	"fmt"
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"

	_ "github.com/lib/pq"
)

// App sets up and runs the app
type App struct {
	DB *gorm.DB
}

const tableCreationQuery = `SELECT count (*)
FROM information_schema.TABLES
WHERE (TABLE_SCHEMA = 'api_test') AND (TABLE_NAME = 'Users')`

func (a *App) ensureTableExists() {
	if err := a.DB.Exec(tableCreationQuery); err != nil {
		log.Fatal(err)
	}
}
The expected response should be either 1 or 0 (I got this query from another SO answer). Instead I get this:
2020/09/03 00:27:18 &{0xc000148900 1 0xc000119ba0 0}
exit status 1
FAIL go-auth 0.287s
My untrained mind says it's a pointer, but how do I reference the returned values to determine what they contain?

If you want to check whether your SQL statement was successfully executed in GORM, you can use the following:
tx := DB.Exec(sqlStr, args...)
if tx.Error != nil {
	return false
}
return true
However, your example is using a SELECT statement, so you need to read the result; that is better suited to the DB.Raw() method, like below:
var exists bool
if err := DB.Raw(sqlStr).Row().Scan(&exists); err != nil {
	return false
}
return exists
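For the question's use case, a minimal sketch combining the two ideas might look like this (the method name and the int64 scan are my additions, not from the original answer):
// TableExists reports whether the given table exists in the given schema.
// Sketch only: it assumes a.DB is an initialized *gorm.DB, as in the App struct above.
func (a *App) TableExists(schema, table string) (bool, error) {
	const q = `SELECT count(*)
FROM information_schema.tables
WHERE table_schema = ? AND table_name = ?`
	var count int64
	if err := a.DB.Raw(q, schema, table).Row().Scan(&count); err != nil {
		return false, err
	}
	return count > 0, nil
}
Calling a.TableExists("api_test", "Users") then yields a plain bool instead of the *gorm.DB value that log.Fatal printed in the question.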

Related

Submitting an SQL query with a slice parameter

I have a Snowflake query where I'm trying to update a field on all items where another field is in a list which is submitted to the query as a variable:
UPDATE my_table SET download_enabled = ? WHERE provider_id = ? AND symbol IN (?)
I've tried doing this query using the gosnowflake.Array function like this:
enable := true
provider := 1
query := "UPDATE my_table SET download_enabled = ? WHERE provider_id = ? AND symbol IN (?)"
if _, err := client.db.ExecContext(ctx, query, enable, provider,
	gosnowflake.Array(assets)); err != nil {
	fmt.Printf("Error: %v", err)
}
However, this code fails with the following error:
002099 (42601): SQL compilation error: Batch size of 1 for bind variable 1 not the same as previous size of 2.
So then, how can I submit a variable representing a list of values to an SQL query?
I found a potential workaround, which is to submit each item in the list as a separate parameter explicitly:
func Delimit(s string, sep string, count uint) string {
	if count == 0 {
		return ""
	}
	return strings.Repeat(s+sep, int(count)-1) + s
}

func doQuery(enable bool, provider int, assets ...string) error {
	query := fmt.Sprintf("UPDATE my_table SET download_enabled = ? "+
		"WHERE provider_id = ? AND symbol IN (%s)", Delimit("?", ", ", uint(len(assets))))
	params := []interface{}{enable, provider}
	for _, asset := range assets {
		params = append(params, asset)
	}
	if _, err := client.db.ExecContext(ctx, query, params...); err != nil {
		return err
	}
	return nil
}
Needless to say, this is a less elegant solution than what I wanted, but it does work.
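For illustration, a call like this (the values are made up) expands the IN clause to one placeholder per asset:
// Hypothetical invocation; client and ctx come from the surrounding service code.
if err := doQuery(true, 1, "AAPL", "MSFT", "GOOG"); err != nil {
	fmt.Println("update failed:", err)
}
// Generated SQL:
//   UPDATE my_table SET download_enabled = ? WHERE provider_id = ? AND symbol IN (?, ?, ?)
// Bound parameters: true, 1, "AAPL", "MSFT", "GOOG"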

How to implement a PATCH with `database/sql`?

Let’s say you have a basic API (GET/POST/PATCH/DELETE) backed by an SQL database.
The PATCH call should only update the fields in the JSON payload that the user sends, without touching any of the other fields.
Imagine the table (let's call it sample) has id, string_a and string_b columns, and the struct which corresponds to it looks like:
type Sample struct {
	ID      int    `json:"id"`
	StringA string `json:"stringA"`
	StringB string `json:"stringB"`
}
Let's say the user passes in { "stringA": "patched value" } as the payload. The JSON will be unmarshalled to something that looks like:
&Sample{
	ID:      0,
	StringA: "patched value",
	StringB: "",
}
For a project using database/sql, you’d write the query to patch the row something like:
// `id` is from the URL params
query := `UPDATE sample SET string_a=$1, string_b=$2 WHERE id=$3`
row := db.QueryRow(query, sample.StringA, sample.StringB, id)
...
That query would update the string_a column as expected, but it’d also update the string_b column to "", which is undesired behavior in this case. In essence, I’ve just created a PUT instead of a PATCH.
My immediate thought was - OK, that’s fine, let’s use strings.Builder to build out the query and only add a SET statement for those that have a non-nil/empty value.
However, in that case, if a user wanted to make string_a empty, how would they accomplish that?
Eg. the user makes a PATCH call with { "stringA": "" } as payload. That would get unmarshalled to something like:
&Sample{
	ID:      0,
	StringA: "",
	StringB: "",
}
The “query builder” I was theorizing about would look at that and say “ok, those are all nil/empty values, don’t add them to the query” and no columns would be updated, which again, is undesired behavior.
I’m not sure how to write my API and the SQL queries it runs in a way that satisfies both cases. Any thoughts?
I think a reasonable solution for smaller queries is to build the UPDATE query and the list of bound parameters dynamically while processing the payload, with logic that recognizes what was updated and what was left empty.
From my own experience this is clear and readable (if it gets repetitive you can always iterate over struct members that share the same logic, or employ reflection and look at struct tag hints, etc.). Every attempt of mine to write a universal solution for this ended up as very convoluted overkill supporting all sorts of corner cases and behavioral differences between endpoints.
func patchSample(s Sample) {
	var query strings.Builder
	params := make([]interface{}, 0, 2)
	// TODO Check if patch makes sense (e.g. id is non-zero, at least one patched value provided, etc.)
	query.WriteString("UPDATE sample SET")
	if s.StringA != "" {
		query.WriteString(" stringA = ?")
		params = append(params, s.StringA)
	}
	if s.StringB != "" {
		if len(params) > 0 {
			query.WriteString(",")
		}
		query.WriteString(" stringB = ?")
		params = append(params, s.StringB)
	}
	query.WriteString(" WHERE id = ?")
	params = append(params, s.ID)
	fmt.Println(query.String(), params)
	//_, err := db.Exec(query.String(), params...)
}
func main() {
	patchSample(Sample{1, "Foo", ""})
	patchSample(Sample{2, "", "Bar"})
	patchSample(Sample{3, "Foo", "Bar"})
}
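For reference, the three calls in main print output along these lines:
UPDATE sample SET stringA = ? WHERE id = ? [Foo 1]
UPDATE sample SET stringB = ? WHERE id = ? [Bar 2]
UPDATE sample SET stringA = ?, stringB = ? WHERE id = ? [Foo Bar 3]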
EDIT: If "" is a valid value for patching, then it needs to be distinguishable from the default empty value. One way to solve that for strings is to use a pointer, which will default to nil if the value is not present in the payload:
type Sample struct {
	ID      int     `json:"id"`
	StringA *string `json:"stringA"`
	StringB *string `json:"stringB"`
}
and then modify the condition(s) to check whether the field was sent, like this:
if s.StringA != nil {
	query.WriteString(" stringA = ?")
	params = append(params, *s.StringA)
}
See full example in playground: https://go.dev/play/p/RI7OsNEYrk6
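To make the nil-versus-empty distinction concrete, here is a small self-contained sketch (standard encoding/json; the payloads are illustrative):
package main

import (
	"encoding/json"
	"fmt"
)

type Sample struct {
	ID      int     `json:"id"`
	StringA *string `json:"stringA"`
	StringB *string `json:"stringB"`
}

func main() {
	var a, b Sample
	_ = json.Unmarshal([]byte(`{"id": 1, "stringA": ""}`), &a)
	_ = json.Unmarshal([]byte(`{"id": 1}`), &b)
	fmt.Println(a.StringA != nil) // true: field was sent with value "", so patch stringA to ""
	fmt.Println(b.StringA != nil) // false: field was absent, so leave stringA untouched
}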
For what it's worth, I solved the issue by:
1. Converting the request payload to a generic map[string]interface{}.
2. Implementing a query builder that loops through the map's keys to create a query.
Part of the reason I went this route is that it fit all my requirements, and I didn't particularly like having *strings or *ints lying around.
Here is what the query builder looks like:
func patchQueryBuilder(id string, patch map[string]interface{}) (string, []interface{}, error) {
	var query strings.Builder
	params := make([]interface{}, 0)
	query.WriteString("UPDATE some_table SET")
	for k, v := range patch {
		switch k {
		case "someString":
			if someString, ok := v.(string); ok {
				query.WriteString(fmt.Sprintf(" some_string=$%d,", len(params)+1))
				params = append(params, someString)
			} else {
				return "", []interface{}{}, fmt.Errorf("could not process some_string")
			}
		case "someBool":
			if someBool, ok := v.(bool); ok {
				query.WriteString(fmt.Sprintf(" some_bool=$%d,", len(params)+1))
				params = append(params, someBool)
			} else {
				return "", []interface{}{}, fmt.Errorf("could not process some_bool")
			}
		}
	}
	if len(params) > 0 {
		// Remove trailing comma to avoid syntax errors
		queryString := fmt.Sprintf("%s WHERE id=$%d RETURNING *", strings.TrimSuffix(query.String(), ","), len(params)+1)
		params = append(params, id)
		return queryString, params, nil
	} else {
		return "", []interface{}{}, nil
	}
}
Note that I'm using PostgreSQL, so I needed to provide numbered parameters to the query, e.g. $1, which is what params is used for. The slice is also returned from the function so that it can be used as follows:
// Build the patch query based on the payload
query, params, err := patchQueryBuilder(id, patch)
if err != nil {
	return nil, err
}
// Use the query/params and get output
row := tx.QueryRowContext(ctx, query, params...)
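For completeness, the patch map itself can be decoded straight from the request body; a minimal sketch using encoding/json and net/http (the handler signature and variable names are assumptions, not part of the original answer):
func patchHandler(w http.ResponseWriter, r *http.Request) {
	// Decode the PATCH body into a generic map so that only the keys the
	// client actually sent end up in it.
	var patch map[string]interface{}
	if err := json.NewDecoder(r.Body).Decode(&patch); err != nil {
		http.Error(w, "invalid JSON payload", http.StatusBadRequest)
		return
	}
	// patch now holds e.g. {"someString": "new value"} and can be handed,
	// together with the id from the URL, to patchQueryBuilder above.
	_ = patch
}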

PlaceholderFormat doesn't replace the dollar sign with the parameter value in SQL using the pgx driver for Postgres

I am new to Go and am trying to check a password against a username in a postgresql database.
I can't get dollar substitution to occur and would rather not resort to concatenating strings.
I am currently using squirrel but also tried it without and didn't have much luck.
I have the following code:
package datalayer

import (
	"database/sql"
	"encoding/json"
	"fmt"
	"net/http"

	sq "github.com/Masterminds/squirrel"
	_ "github.com/jackc/pgx/v4/stdlib"
	"golang.org/x/crypto/bcrypt"

	"github.com/gin-gonic/gin"
)

var (
	// for the database
	db *sql.DB
)

func InitDB(sqlDriver string, dataSource string) error {
	var err error
	// Connect to the postgres db (sqlDriver is literal string "pgx")
	db, err = sql.Open(sqlDriver, dataSource)
	if err != nil {
		panic(err)
	}
	return db.Ping()
}

// Create a struct that models the structure of a user, both in the request body, and in the DB
type Credentials struct {
	Password string `json:"password", db:"password"`
	Username string `json:"username", db:"username"`
}

func Signin(c *gin.Context) {
	// Parse and decode the request body into a new `Credentials` instance
	creds := &Credentials{}
	err := json.NewDecoder(c.Request.Body).Decode(creds)
	if err != nil {
		// If there is something wrong with the request body, return a 400 status
		c.Writer.WriteHeader(http.StatusBadRequest)
		return
	}
	query := sq.
		Select("password").
		From("users").
		Where("username = $1", creds.Username).
		PlaceholderFormat(sq.Dollar)
	// The line below doesn't substitute the $ sign, it shows this: SELECT password FROM users WHERE username = $1 [rgfdgfd] <nil>
	fmt.Println(sq.
		Select("password").
		From("users").
		Where("username = $1", creds.Username).
		PlaceholderFormat(sq.Dollar).ToSql())
	rows, sqlerr := query.RunWith(db).Query()
	if sqlerr != nil {
		panic(fmt.Sprintf("QueryRow failed: %v", sqlerr))
	}
	if err != nil {
		// If there is an issue with the database, return a 500 error
		c.Writer.WriteHeader(http.StatusInternalServerError)
		return
	}
	// We create another instance of `Credentials` to store the credentials we get from the database
	storedCreds := &Credentials{}
	// Store the obtained password in `storedCreds`
	err = rows.Scan(&storedCreds.Password)
	if err != nil {
		// If an entry with the username does not exist, send an "Unauthorized"(401) status
		if err == sql.ErrNoRows {
			c.Writer.WriteHeader(http.StatusUnauthorized)
			return
		}
		// If the error is of any other type, send a 500 status
		c.Writer.WriteHeader(http.StatusInternalServerError)
		return
	}
	// Compare the stored hashed password, with the hashed version of the password that was received
	if err = bcrypt.CompareHashAndPassword([]byte(storedCreds.Password), []byte(creds.Password)); err != nil {
		// If the two passwords don't match, return a 401 status
		c.Writer.WriteHeader(http.StatusUnauthorized)
	}
	fmt.Printf("We made it !")
	// If we reach this point, that means the users password was correct, and that they are authorized
	// The default 200 status is sent
}
When I check pgAdmin, I see that the dollar sign is not being substituted.
The substitution of the placeholders is done by the Postgres server; it SHOULD NOT be the job of the Go code, or squirrel, to do the substitution.
When you are executing a query that takes parameters, a rough outline of what the database driver has to do is something like the following:
1. Using the query string, with placeholders untouched, a parse request is sent to the postgres server to create a prepared statement.
2. Using the parameter values and the identifier of the newly-created statement, a bind request is sent to make the statement ready for execution by creating a portal. A portal (similar to, but not the same as, a cursor) represents a ready-to-execute or already-partially-executed statement, with any missing parameter values filled in.
3. Using the portal's identifier, an execute request is sent to the server, which then executes the portal's query.
Note that the above steps are just a rough outline, in reality there are more request-response cycles involved between the db client and server.
And as far as pgAdmin is concerned I believe what it is displaying to you is the prepared statement as created by the parse request, although I can't tell for sure as I am not familiar with it.
In theory, a helper library like squirrel, or a driver library like pgx, could implement the substitution of parameters themselves and then send a simple query to the server. In general, however, given the possibility of SQL injections, it is better to leave it to the authority of the postgres server, in my opinion.
The PlaceholderFormat's job is simply to translate the placeholders to the specified format. For example, you could write your SQL using the MySQL format (?,?,...) and then invoke the PlaceholderFormat(sq.Dollar) method to translate that into the PostgreSQL format ($1,$2,...).
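As a concrete illustration of that last point, here is a sketch of how the Signin query could be written with ? placeholders and executed so that the server receives $1 plus the bound argument. It reuses creds, db, and c from the question's code; the error-handling details are my additions:
sqlStr, args, err := sq.
	Select("password").
	From("users").
	Where("username = ?", creds.Username).
	PlaceholderFormat(sq.Dollar).
	ToSql()
if err != nil {
	c.Writer.WriteHeader(http.StatusInternalServerError)
	return
}
// sqlStr is now "SELECT password FROM users WHERE username = $1" and
// args holds creds.Username; the Postgres server performs the binding.
storedCreds := &Credentials{}
if err := db.QueryRow(sqlStr, args...).Scan(&storedCreds.Password); err != nil {
	if err == sql.ErrNoRows {
		c.Writer.WriteHeader(http.StatusUnauthorized)
		return
	}
	c.Writer.WriteHeader(http.StatusInternalServerError)
	return
}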

BigQuery Schema Update while copying data from other tables

I have table1, which has lots of nested columns, and table2, which has some additional columns that may also have nested columns. I'm using the Go client library.
Is there any way to update the schema while copying from one table to another table?
Sample Code :
dataset := client.Dataset("test")
copier := dataset.Table(table1).CopierFrom(dataset.Table(table2))
copier.WriteDisposition = bigquery.WriteAppend
copier.CreateDisposition = bigquery.CreateIfNeeded
job, err := copier.Run(ctx)
if err != nil {
	fmt.Println("error while run :", err)
}
status, err := job.Wait(ctx)
if err != nil {
	fmt.Println("error in wait :", err)
}
if err := status.Err(); err != nil {
	fmt.Println("error in status :", err)
}
Some background first:
I created 2 tables under the dataset test as follows:
Table 1 schema: name (String), age (Integer)
	"Varun", 19
	"Raja", 27
Table 2 schema: pet_name (String), type (String)
	"jimmy", "dog"
	"ramesh", "cat"
Note that the two relations have different schemas.
Here I am copying the contents of table 2 into table 1. The bigquery.WriteAppend tells the query engine to append the results of table 2 to table 1.
test := client.Dataset("test")
copier := test.Table("1").CopierFrom(test.Table("2"))
copier.WriteDisposition = bigquery.WriteAppend
if _, err := copier.Run(ctx); err != nil {
	log.Fatalln(err)
}

query := client.Query("SELECT * FROM `test.1`;")
results, err := query.Read(ctx)
if err != nil {
	log.Fatalln(err)
}
for {
	row := make(map[string]bigquery.Value)
	err := results.Next(&row)
	if err == iterator.Done {
		return
	}
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println(row)
}
Nothing happens and the result is:
map[age:19 name:Varun]
map[name:Raja age:27]
Table 1, the destination, is unchanged.
What if source and destination had the same schemas in the copy?
For example:
copier := test.Table("1").CopierFrom(test.Table("1"))
Then the copy succeeds! And table 1 has twice the rows it initially had.
map[name:Varun age:19]
map[age:27 name:Raja]
map[name:Varun age:19]
map[name:Raja age:27]
But what if we somehow wanted to combine tables even with different schemas?
Well, first you need a GCP billing account, as you are technically doing data manipulation (DML). You can get $300 of free credit.
Then the following will work:
query := client.Query("SELECT * FROM `test.2`;")
query.SchemaUpdateOptions = []string{"ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"}
query.CreateDisposition = bigquery.CreateIfNeeded
query.WriteDisposition = bigquery.WriteAppend
query.QueryConfig.Dst = client.Dataset("test").Table("1")
results, err := query.Read(ctx)
And the result is
map[pet_name:<nil> type:<nil> name:Varun age:19]
map[name:Raja age:27 pet_name:<nil> type:<nil>]
map[pet_name:ramesh type:cat name:<nil> age:<nil>]
map[pet_name:jimmy type:dog name:<nil> age:<nil>]
EDIT
Instead of query.Read() you can use query.Run() if you just want to run the query and not fetch the results back, as shown below:
if _, err := query.Run(ctx); err != nil {
	log.Fatalln(err)
}
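If you do want query errors surfaced when using Run, you can wait on the returned job, mirroring the pattern from the question's own code (a sketch; it assumes the ctx and query values from above):
job, err := query.Run(ctx)
if err != nil {
	log.Fatalln(err)
}
status, err := job.Wait(ctx)
if err != nil {
	log.Fatalln(err)
}
if err := status.Err(); err != nil {
	log.Fatalln(err)
}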
Important things to note:
We have set query.SchemaUpdateOptions to include ALLOW_FIELD_ADDITION which will allow for the resulting table to have columns not originally present.
We have set query.WriteDisposition to bigquery.WriteAppend for data to be appended.
We have set query.QueryConfig.Dst to client.Dataset("test").Table("1") which means the result of the query will be uploaded to 1.
Values that are not in both tables but in just one are nullified, i.e. set to nil in the Go sense.
This hack will give you the same results as combining two tables.
Hope this helps.

Golang SQL query variable substitution

I have a SQL query that needs variable substitution for better consumption by my go-kit service.
I have dep & org as user inputs, which are part of my REST service; for instance, dep = 'abc' and org = 'def'.
I've tried a few things like:
rows, err := db.Query(
	"select name from table where department='&dep' and organisation='&org'",
)
And:
rows, err := db.Query(
	"select name from table where department=? and organisation=?", dep, org,
)
That led to error: sql: statement expects 0 inputs; got 2
Only hard-coded values work; substitution fails.
I haven't found much help in the Oracle blogs regarding this, and I'm wondering if there is any way to approach it.
Parameter Placeholder Syntax (reference: http://go-database-sql.org/prepared.html)
The syntax for placeholder parameters in prepared statements is
database-specific. For example, comparing MySQL, PostgreSQL, and
Oracle:
MySQL              PostgreSQL            Oracle
=====              ==========            ======
WHERE col = ?      WHERE col = $1        WHERE col = :col
VALUES(?, ?, ?)    VALUES($1, $2, $3)    VALUES(:val1, :val2, :val3)
For oracle you need to use :dep, :org as placeholders.
As @dakait stated, in your prepared statement you should use : placeholders.
So, for completeness, you would get it working with something like:
package main

import (
	"database/sql"
	"fmt"
	"log"
)

// Output is an example struct
type Output struct {
	Name string
}

const (
	dep = "abc"
	org = "def"
)

// db is assumed to be initialized elsewhere with sql.Open and your Oracle driver.
var db *sql.DB

func main() {
	query := "SELECT name FROM table WHERE department = :1 AND organisation = :2"
	q, err := db.Prepare(query)
	if err != nil {
		log.Fatal(err)
	}
	defer q.Close()

	var out Output
	if err := q.QueryRow(dep, org).Scan(&out.Name); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Name)
}
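Depending on your Oracle driver, named placeholders may also work through database/sql's sql.Named. This is only a sketch; whether :dep/:org named binds are accepted is driver-specific, so check your driver's documentation:
row := db.QueryRow(
	"SELECT name FROM table WHERE department = :dep AND organisation = :org",
	sql.Named("dep", dep),
	sql.Named("org", org),
)
var out Output
if err := row.Scan(&out.Name); err != nil {
	log.Fatal(err)
}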