Problem reading uniqueidentifier from SQL response

I have tried to find a solution for this problem, but I keep banging my head against the wall with this one.
This function is part of a Go SQL wrapper; getJSON is called to extract the information from the SQL response.
The problem is that the id parameter comes back as gibberish and does not match the desired response. All the other parameters read correctly, though, so this really weirds me out.
Thank you in advance for any attempt at figuring this problem out, it is really appreciated :-)
func getJSON(rows *sqlx.Rows) ([]byte, error) {
	columns, err := rows.Columns()
	rawResult := make([][]byte, len(columns))
	dest := make([]interface{}, len(columns))
	for i := range rawResult {
		dest[i] = &rawResult[i]
	}

	defer rows.Close()

	var results []map[string][]byte

	for rows.Next() {
		result := make(map[string][]byte, len(columns))
		rows.Scan(dest...)

		for i, raw := range rawResult {
			if raw == nil {
				result[columns[i]] = []byte("")
			} else {
				result[columns[i]] = raw
				fmt.Println(columns[i] + " : " + string(raw))
			}
		}

		results = append(results, result)
	}

	s, err := json.Marshal(results)
	if err != nil {
		panic(err)
	}
	rows.Close()
	return s, nil
}
An example of the response, taken from the terminal:
id : r�b�X��M���+�2%
name : cat
issub : false
Expected result:
id : E262B172-B158-4DEF-8015-9BA12BF53225
name : cat
issub : false

That's not about type conversion.
A UUID (of any version) is defined to be a 128-bit-long lump of bytes, which is 128/8 = 16 bytes.
This means any bytes, not necessarily printable ones.
What you're after is a string representation of a UUID value, which separates certain groups of bytes using dashes and formats each byte in these groups using hexadecimal (base-16) notation.
Since a single base-16 digit represents the values 0 through 15 ('0' through 'F'), a single byte is represented by two such digits, one digit per group of 4 bits.
Any sensible UUID package should implement a "decoding" function or method which produces such a string representation out of those 16 bytes.
I picked a package more or less at random from a search for Go UUID libraries: github.com/google/uuid has a FromBytes function which produces a UUID from a given byte slice, and the resulting UUID type implements the String() method, which produces exactly what you're after.
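For illustration, here is a minimal, self-contained sketch of that approach using github.com/google/uuid (the package this answer happened to pick; any equivalent library works). One assumption to flag clearly: judging by the garbled bytes shown above, the driver appears to hand back the value in SQL Server's mixed-endian uniqueidentifier layout (the first three groups byte-reversed), so the sketch reorders them before formatting. Verify that against your own data before relying on it.
package main

import (
	"fmt"
	"log"

	"github.com/google/uuid"
)

// uniqueidentifierToString converts the 16 raw bytes of an MSSQL
// uniqueidentifier column into the canonical string form.
// Assumption: the bytes arrive in SQL Server's mixed-endian layout,
// so the first three groups are byte-reversed before formatting.
func uniqueidentifierToString(raw []byte) (string, error) {
	if len(raw) != 16 {
		return "", fmt.Errorf("expected 16 bytes, got %d", len(raw))
	}
	b := make([]byte, 16)
	copy(b, raw)
	b[0], b[1], b[2], b[3] = raw[3], raw[2], raw[1], raw[0]
	b[4], b[5] = raw[5], raw[4]
	b[6], b[7] = raw[7], raw[6]
	u, err := uuid.FromBytes(b)
	if err != nil {
		return "", err
	}
	return u.String(), nil
}

func main() {
	// the bytes behind E262B172-B158-4DEF-8015-9BA12BF53225 in SQL Server order
	raw := []byte{
		0x72, 0xB1, 0x62, 0xE2, 0x58, 0xB1, 0xEF, 0x4D,
		0x80, 0x15, 0x9B, 0xA1, 0x2B, 0xF5, 0x32, 0x25,
	}
	s, err := uniqueidentifierToString(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s) // e262b172-b158-4def-8015-9ba12bf53225 (lowercase form)
}
If your driver already returns the bytes in RFC 4122 (big-endian) order, drop the reordering and pass the raw slice to uuid.FromBytes directly.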

Related

Get pointers to all fields of a struct dynamically using reflection

I'm trying to build a simple ORM layer for Go.
It would take a struct and generate a cols []interface{} slice which can then be passed to the SQL function
rows.Scan(cols...), which takes pointers to the fields in the struct corresponding to each of the columns found in the result set.
Here is my example struct:
type ExampleStruct struct {
	ID     int64  `sql:"id"`
	aID    string `sql:"a_id"`
	UserID int64  `sql:"user_id"`
}
And this is my generic ORM function
func GetSqlColumnToFieldMap(model *ExampleStruct) map[string]interface{} {
	typeOfModel := reflect.TypeOf(*model)
	ValueOfModel := reflect.ValueOf(*model)
	columnToDataPointerMap := make(map[string]interface{})
	for i := 0; i < ValueOfModel.NumField(); i++ {
		sql_column := typeOfModel.Field(i).Tag.Get("sql")
		structValue := ValueOfModel.Field(i)
		columnToDataPointerMap[sql_column] = structValue.Addr()
	}
	return columnToDataPointerMap
}
Once this method works, I can use the map it generates to create an ordered list of SQL pointers according to the column names I get in the rows() object.
However, I get the below error on the .Addr() method call:
panic: reflect.Value.Addr of unaddressable value [recovered]
panic: reflect.Value.Addr of unaddressable value
Is it not possible to do this?
Also, in an ideal scenario I would want the method to take an interface{} instead of *ExampleStruct so that it can be reused across different db models.
The error says the value whose address you want to get is unaddressable. This is because even though you pass a pointer to GetSqlColumnToFieldMap(), you immediately dereference it and work with a non-pointer value from then on.
This value is wrapped in an interface{} when passed to reflect.ValueOf(), and values wrapped in interfaces are not addressable.
You must not dereference the pointer; instead, use Type.Elem() and Value.Elem() to get the element type and the pointed-to value.
Something like this:
func GetSqlColumnToFieldMap(model *ExampleStruct) map[string]interface{} {
	t := reflect.TypeOf(model).Elem()
	v := reflect.ValueOf(model).Elem()
	columnToDataPointerMap := make(map[string]interface{})
	for i := 0; i < v.NumField(); i++ {
		sql_column := t.Field(i).Tag.Get("sql")
		structValue := v.Field(i)
		columnToDataPointerMap[sql_column] = structValue.Addr()
	}
	return columnToDataPointerMap
}
With this simple change it works! And it doesn't depend on the parameter type: you may change it to interface{} and pass pointers to any struct.
func GetSqlColumnToFieldMap(model interface{}) map[string]interface{} {
	// ...
}
Testing it:
type ExampleStruct struct {
	ID     int64  `sql:"id"`
	AID    string `sql:"a_id"`
	UserID int64  `sql:"user_id"`
}

type Point struct {
	X int `sql:"x"`
	Y int `sql:"y"`
}

func main() {
	fmt.Println(GetSqlColumnToFieldMap(&ExampleStruct{}))
	fmt.Println(GetSqlColumnToFieldMap(&Point{}))
}
Output (try it on the Go Playground):
map[a_id:<*string Value> id:<*int64 Value> user_id:<*int64 Value>]
map[x:<*int Value> y:<*int Value>]
Note that Value.Addr() returns the address wrapped in a reflect.Value. To "unwrap" the pointer, use Value.Interface():
func GetSqlColumnToFieldMap(model interface{}) map[string]interface{} {
	t := reflect.TypeOf(model).Elem()
	v := reflect.ValueOf(model).Elem()

	m := make(map[string]interface{})
	for i := 0; i < v.NumField(); i++ {
		colName := t.Field(i).Tag.Get("sql")
		field := v.Field(i)
		m[colName] = field.Addr().Interface()
	}
	return m
}
This will output (try it on the Go Playground):
map[a_id:0xc00007e008 id:0xc00007e000 user_id:0xc00007e018]
map[x:0xc000018060 y:0xc000018068]
For an in-depth introduction to reflection, please read the blog post The Laws of Reflection.
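To tie this back to the original goal, here is a rough sketch (not part of the answer above) of how the returned map could feed rows.Scan, ordering the field pointers by the columns reported in the result set. The scanInto helper name is made up for illustration, and it assumes the interface{}-based GetSqlColumnToFieldMap and the standard database/sql package:
func scanInto(rows *sql.Rows, model interface{}) error {
	cols, err := rows.Columns()
	if err != nil {
		return err
	}
	// column name -> pointer to the matching struct field
	fieldMap := GetSqlColumnToFieldMap(model)

	dest := make([]interface{}, len(cols))
	for i, c := range cols {
		ptr, ok := fieldMap[c]
		if !ok {
			// no field tagged with this column: scan into a throwaway value
			ptr = new(interface{})
		}
		dest[i] = ptr
	}
	return rows.Scan(dest...)
}
Inside a rows.Next() loop you would call it with a pointer to the struct to fill, e.g. var e ExampleStruct; scanInto(rows, &e).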

Go SQL query inconsistency

I am experiencing some really weird inconsistencies when executing queries, and was wondering if anyone knew why.
Imagine I have a struct defined as follows:
type Result struct {
	Afield string      `db:"A"`
	Bfield interface{} `db:"B"`
	Cfield string      `db:"C"`
	Dfield string      `db:"D"`
}
And a MySQL Table with the following cols:
A : VARCHAR(50)
B : INT
C : VARCHAR(50)
D : VARCHAR(50)
The query I would like to execute:
SELECT A, B, C, D FROM table WHERE A="a"
The first way it can be executed:
db.Get(&result, `SELECT A, B, C, D FROM table WHERE A="a"`)
The second way it can be executed:
db.Get(&result, `SELECT A, B, C, D FROM table WHERE A=?`, "a")
The inconsistencies I am experiencing are as follows: When executing the query the first way, the type of Bfield is int. However, when executing the query the second time, it is []uint8.
This happens, for example, when B is 1.
Why is the type of Bfield different depending on how the query is executed?
connection declaration:
// Connection is an interface for making queries.
type Connection interface {
	Exec(query string, args ...interface{}) (sql.Result, error)
	Get(dest interface{}, query string, args ...interface{}) error
	Select(dest interface{}, query string, args ...interface{}) error
}
EDIT
This is also happening using the Go database/sql package + driver. The queries below are assigning Bfield to []uint8 and int64 respectively.
db is of type *sql.DB
query 1:
db.QueryRow(`SELECT A, B, C, D FROM table WHERE A="a"`).Scan(&result.Afield, &result.Bfield, &result.Cfield, &result.Dfield)
--> type of Bfield is []uint8
query 2:
db.QueryRow(`SELECT A, B, C, D FROM table WHERE A=?`, "a").Scan(&result.Afield, &result.Bfield, &result.Cfield, &result.Dfield)
--> type of Bfield is int64
EDIT
Something else to note: when chaining multiple WHERE clauses, as long as at least one is populated using ?, the query will return an int. Otherwise, if they are all populated in the query string, it will return []uint8.
Short answer: because the MySQL driver uses a different protocol for queries with and without parameters. Use a prepared statement to get consistent results.
The following explanation refers to the standard MySQL driver github.com/go-sql-driver/mysql, version 1.4
In the first case, the driver sends the query directly to MySQL, and interprets the result as a *textRows struct. This struct (almost) always decodes results into a byte slice, and leaves the conversion to a better type to the Go sql package. This works fine if the destination is an int, string, sql.Scanner etc, but not for interface{}.
In the second case, the driver detects that there are arguments and returns driver.ErrSkip. This causes the Go SQL package to use a PreparedStatement. And in that case, the MySQL driver uses a *binaryRows struct to interpret the results. This struct uses the declared column type (INT in this case) to decode the value, in this case to decode the value into an int64.
Fun fact: if you provide the interpolateParams=true parameter in the database DSN (e.g. "root:testing@/mysql?interpolateParams=true"), the MySQL driver will prepare the query on the client side and not use a PreparedStatement. At this point both types of query behave the same.
A small proof of concept:
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

type Result struct {
	Afield string
	Bfield interface{}
}

func main() {
	db, err := sql.Open("mysql", "root:testing@/mysql")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err = db.Exec(`CREATE TABLE IF NOT EXISTS mytable(A VARCHAR(50), B INT);`); err != nil {
		log.Fatal(err)
	}
	if _, err = db.Exec(`DELETE FROM mytable`); err != nil {
		log.Fatal(err)
	}
	if _, err = db.Exec(`INSERT INTO mytable(A, B) VALUES ('a', 3)`); err != nil {
		log.Fatal(err)
	}

	var (
		usingLiteral         Result
		usingParam           Result
		usingLiteralPrepared Result
	)

	row := db.QueryRow(`SELECT B FROM mytable WHERE A='a'`)
	if err := row.Scan(&usingLiteral.Bfield); err != nil {
		log.Fatal(err)
	}

	row = db.QueryRow(`SELECT B FROM mytable WHERE A=?`, "a")
	if err := row.Scan(&usingParam.Bfield); err != nil {
		log.Fatal(err)
	}

	stmt, err := db.Prepare(`SELECT B FROM mytable WHERE A='a'`)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()
	row = stmt.QueryRow()
	if err := row.Scan(&usingLiteralPrepared.Bfield); err != nil {
		log.Fatal(err)
	}

	log.Printf("Type when using literal: %T", usingLiteral.Bfield)          // []uint8
	log.Printf("Type when using param: %T", usingParam.Bfield)              // int64
	log.Printf("Type when using prepared: %T", usingLiteralPrepared.Bfield) // int64
}
Your first SQL string is ambiguous in MySQL and can have two meanings, as explained on Stack Overflow here:
When to use single quotes, double quotes, and back ticks in MySQL
Depending on the SQL mode, your SQL command can be interpreted as
SELECT A, B, C, D FROM table WHERE A='a'
which is what I think you are expecting, or as
SELECT A, B, C, D FROM table WHERE A=`a`
To avoid this ambiguity, can you run a first test replacing the double quotes with single quotes?
If the same behavior persists, my answer is not a good response.
If BOTH SELECT statements return the same value, your question has been solved.
Using the ` character, you pass an identifier (such as a column name) and not a string value!

Use Gob to write logs to a file in an append style

Would it be possible to use Gob encoding for appending structs in series to the same file using append? It works for writing, but when reading with the decoder more than once I run into:
extra data in buffer
So I wonder whether that's possible in the first place, or whether I should use something like JSON and append JSON documents on a per-line basis instead, because the alternative would be to serialize a slice, but then reading it back as a whole would defeat the purpose of appending.
The gob package wasn't designed to be used this way. A gob stream has to be written by a single gob.Encoder, and it also has to be read by a single gob.Decoder.
The reason for this is because the gob package not only serializes the values you pass to it, it also transmits data to describe their types:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
This is state held by the encoder / decoder (about which types have been transmitted and how); a subsequent new encoder / decoder will not (and cannot) analyze the preceding stream to reconstruct the same state and continue where a previous encoder / decoder left off.
Of course if you create a single gob.Encoder, you may use it to serialize as many values as you'd like to.
Also you can create a gob.Encoder and write to a file, and then later create a new gob.Encoder, and append to the same file, but you must use 2 gob.Decoders to read those values, exactly matching the encoding process.
As a demonstration, let's follow an example. This example will write to an in-memory buffer (bytes.Buffer). 2 subsequent encoders will write to it, then we will use 2 subsequent decoders to read the values. We'll write values of this struct:
type Point struct {
	X, Y int
}
For short, compact code, I use this "error handler" function:
func he(err error) {
	if err != nil {
		panic(err)
	}
}
And now the code:
const n, m = 3, 2

buf := &bytes.Buffer{}

e := gob.NewEncoder(buf)
for i := 0; i < n; i++ {
	he(e.Encode(&Point{X: i, Y: i * 2}))
}

e = gob.NewEncoder(buf)
for i := 0; i < m; i++ {
	he(e.Encode(&Point{X: i, Y: 10 + i}))
}

d := gob.NewDecoder(buf)
for i := 0; i < n; i++ {
	var p *Point
	he(d.Decode(&p))
	fmt.Println(p)
}

d = gob.NewDecoder(buf)
for i := 0; i < m; i++ {
	var p *Point
	he(d.Decode(&p))
	fmt.Println(p)
}
Output (try it on the Go Playground):
&{0 0}
&{1 2}
&{2 4}
&{0 10}
&{1 11}
Note that if we used only 1 decoder to read all the values (looping until i < n + m), we'd get the same error message you posted in your question when the iteration reaches n + 1, because the subsequent data is not a serialized Point but the start of a new gob stream.
So if you want to stick with the gob package for doing what you want to do, you have to slightly modify and enhance your encoding / decoding process. You have to somehow mark the boundaries when a new encoder is used (so when decoding, you'll know you have to create a new decoder to read subsequent values).
You may use different techniques to achieve this:
You may write out a count before you proceed to write values; this number tells how many values were written using the current encoder (see the sketch after this list).
If you don't want to or can't tell how many values will be written with the current encoder, you may opt to write out a special end-of-encoder value when you don't write more values with the current encoder. When decoding, if you encounter this special end-of-encoder value, you'll know you have to create a new decoder to be able to read more values.
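Here is a minimal, self-contained sketch of the first technique (the count prefix). It is only an illustration of the idea, not code from the answer above; it uses an in-memory buffer, but the same works with a file opened for appending:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"io"
	"log"
)

type Point struct {
	X, Y int
}

// appendBatch writes one "batch" with a fresh encoder: first the number
// of values, then the values themselves.
func appendBatch(w io.Writer, pts []Point) error {
	enc := gob.NewEncoder(w)
	if err := enc.Encode(len(pts)); err != nil {
		return err
	}
	for i := range pts {
		if err := enc.Encode(&pts[i]); err != nil {
			return err
		}
	}
	return nil
}

// readAll reads batches until EOF, creating a new decoder per batch.
func readAll(r io.Reader) ([]Point, error) {
	var all []Point
	for {
		dec := gob.NewDecoder(r)
		var n int
		if err := dec.Decode(&n); err != nil {
			if err == io.EOF {
				return all, nil
			}
			return nil, err
		}
		for i := 0; i < n; i++ {
			var p Point
			if err := dec.Decode(&p); err != nil {
				return nil, err
			}
			all = append(all, p)
		}
	}
}

func main() {
	buf := &bytes.Buffer{}
	if err := appendBatch(buf, []Point{{1, 2}, {3, 4}}); err != nil {
		log.Fatal(err)
	}
	if err := appendBatch(buf, []Point{{5, 6}}); err != nil {
		log.Fatal(err)
	}
	pts, err := readAll(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(pts) // [{1 2} {3 4} {5 6}]
}
Each appendBatch call plays the role of one program run appending to the log, and the count is exactly the boundary information the reader needs in order to know when to start a new decoder.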
Some things to note here:
The gob package is most efficient and most compact if only a single encoder is used, because each time you create and use a new encoder, the type specifications have to be re-transmitted, causing more overhead and making the encoding / decoding process slower.
You can't seek in the data stream, you can only decode any value if you read the whole file from the beginning up until the value you want. Note that this somewhat applies even if you use other formats (such as JSON or XML).
If you want seeking functionality, you'd need to manage an index file separately, which would tell at which positions new encoders / decoders start, so you could seek to that position, create a new decoder, and start reading values from there.
Check a related question: Efficient Go serialization of struct to disk
In addition to the above, I suggest using an intermediate structure to exclude the gob header:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"io"
	"log"
)

type Point struct {
	X, Y int
}

func main() {
	buf := new(bytes.Buffer)
	enc, _, err := NewEncoderWithoutHeader(buf, new(Point))
	if err != nil {
		log.Fatal(err)
	}
	enc.Encode(&Point{10, 10})
	fmt.Println(buf.Bytes())
}

type HeaderSkiper struct {
	src io.Reader
	dst io.Writer
}

func (hs *HeaderSkiper) Read(p []byte) (int, error) {
	return hs.src.Read(p)
}

func (hs *HeaderSkiper) Write(p []byte) (int, error) {
	return hs.dst.Write(p)
}

func NewEncoderWithoutHeader(w io.Writer, sample interface{}) (*gob.Encoder, *bytes.Buffer, error) {
	hs := new(HeaderSkiper)
	hdr := new(bytes.Buffer)
	hs.dst = hdr
	enc := gob.NewEncoder(hs)
	// Write sample with header info
	if err := enc.Encode(sample); err != nil {
		return nil, nil, err
	}
	// Change writer
	hs.dst = w
	return enc, hdr, nil
}

func NewDecoderWithoutHeader(r io.Reader, hdr *bytes.Buffer, dummy interface{}) (*gob.Decoder, error) {
	hs := new(HeaderSkiper)
	hs.src = hdr
	dec := gob.NewDecoder(hs)
	if err := dec.Decode(dummy); err != nil {
		return nil, err
	}
	hs.src = r
	return dec, nil
}
Additionally to icza's great answer, you could use the following trick to append to a gob file that already contains data: when appending for the first time, write and discard the first encode:
Create the file and encode the gob as usual (the first encode writes the headers).
Close the file.
Open the file for append.
Using an intermediate writer, encode a dummy struct (which writes the headers).
Reset the writer.
Encode the gob as usual (writes no headers).
Example:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
)

type Record struct {
	ID   int
	Body string
}

func main() {
	r1 := Record{ID: 1, Body: "abc"}
	r2 := Record{ID: 2, Body: "def"}

	// encode r1
	var buf1 bytes.Buffer
	enc := gob.NewEncoder(&buf1)
	err := enc.Encode(r1)
	if err != nil {
		log.Fatal(err)
	}

	// write to file
	err = ioutil.WriteFile("/tmp/log.gob", buf1.Bytes(), 0600)
	if err != nil {
		log.Fatal(err)
	}

	// encode dummy (which writes the headers)
	var buf2 bytes.Buffer
	enc = gob.NewEncoder(&buf2)
	err = enc.Encode(Record{})
	if err != nil {
		log.Fatal(err)
	}

	// remove dummy
	buf2.Reset()

	// encode r2
	err = enc.Encode(r2)
	if err != nil {
		log.Fatal(err)
	}

	// open file
	f, err := os.OpenFile("/tmp/log.gob", os.O_WRONLY|os.O_APPEND, 0600)
	if err != nil {
		log.Fatal(err)
	}

	// write r2
	_, err = f.Write(buf2.Bytes())
	if err != nil {
		log.Fatal(err)
	}

	// decode file
	data, err := ioutil.ReadFile("/tmp/log.gob")
	if err != nil {
		log.Fatal(err)
	}

	var r Record
	dec := gob.NewDecoder(bytes.NewReader(data))
	for {
		err = dec.Decode(&r)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(r)
	}
}

Go SQL scanned rows getting overwritten

I'm trying to read all the rows from a table on a SQL server and store them in string slices to use for later. The issue I'm running into is that the previously scanned rows are getting overwritten every time I scan a new row, even though I've converted all the mutable byte slices to immutable strings and saved the result slices to another slice. Here is the code I'm using:
rawResult := make([]interface{}, len(cols)) // holds anything that could be in a row
result := make([]string, len(cols))         // will hold all row elements as strings
var results [][]string                      // will hold all the result string slices

dest := make([]interface{}, len(cols)) // temporary, to pass into scan
for i, _ := range rawResult {
	dest[i] = &rawResult[i] // fill dest with pointers to rawResult to pass into scan
}

for rows.Next() { // for each row
	err = rows.Scan(dest...) // scan the row
	if err != nil {
		log.Fatal("Failed to scan row", err)
	}

	for i, raw := range rawResult { // for each scanned value in a row
		switch rawtype := raw.(type) { // determine type, convert to string
		case int64:
			result[i] = strconv.FormatInt(raw.(int64), 10)
		case float64:
			result[i] = strconv.FormatFloat(raw.(float64), 'f', -1, 64)
		case bool:
			result[i] = strconv.FormatBool(raw.(bool))
		case []byte:
			result[i] = string(raw.([]byte))
		case string:
			result[i] = raw.(string)
		case time.Time:
			result[i] = raw.(time.Time).String()
		case nil:
			result[i] = ""
		default: // shouldn't actually be reachable since all types have been covered
			log.Fatal("Unexpected type %T", rawtype)
		}
	}
	results = append(results, result) // append the result to our slice of results
}
I'm sure this has something to do with the way Go handles variables and memory, but I can't seem to fix it. Can somebody explain what I'm not understanding?
You should create a new slice for each data row. Notice that a slice holds a pointer to an underlying array, so every slice you appended to results points to the same backing array. That's why you are seeing this behaviour.
When you create a slice using make() it returns a value (not a pointer to it), but it does not allocate new memory each time an element is reassigned. Hence
result := make([]string, 5)
has a fixed backing array holding 5 strings; when an element is reassigned, it occupies the same memory as before, thereby overwriting the old value.
Hopefully the following example makes things clear:
http://play.golang.org/p/3w2NtEHRuu
So in your program you are changing the contents of the same memory and appending it again and again. To solve this problem, create your result slice inside the loop:
move result := make([]string, len(cols)) into the for loop that iterates over the available rows.
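A minimal sketch of that fix, reusing the variable names from the question:
for rows.Next() {
	// allocate a fresh result slice for every row, so rows already
	// appended to results keep their own backing array
	result := make([]string, len(cols))

	err = rows.Scan(dest...)
	if err != nil {
		log.Fatal("Failed to scan row", err)
	}

	// ... the same type switch as above, filling result[i] ...

	results = append(results, result)
}
Alternatively, keep a single result slice and append a copy of it instead: results = append(results, append([]string(nil), result...)).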

Cross-database prepared statement binding (like and where in) in Golang

After reading many tutorials, I found that there are many ways to bind arguments to a prepared statement in Go, some of them:
SELECT * FROM bla WHERE x = ?col1 AND y = ?col2
SELECT * FROM bla WHERE x = ? AND y = ?
SELECT * FROM bla WHERE x = :col1 AND y = :col2
SELECT * FROM bla WHERE x = $1 AND y = $2
First question: what is the cross-database way to bind arguments (one that works on any database)?
Second question: none of the tutorials I've read mention the LIKE statement; how do I bind arguments for a LIKE statement correctly?
SELECT * FROM bla WHERE x LIKE /*WHAT?*/
Third question: also none of them give an example for the IN statement; how do I bind arguments for an IN statement correctly?
SELECT * FROM bla WHERE x IN ( /*WHAT?*/ )
What is the cross-database way to bind arguments?
With database/sql, there is none. Each database has its own way to represent parameter placeholders. The Go database/sql package does not provide any normalization facility for the prepared statements. Prepared statement texts are just passed to the underlying driver, and the driver typically just sends them unmodified to the database server (or library for embedded databases).
How to bind arguments for LIKE-statement correctly?
You can use a parameter placeholder after the LIKE keyword and bind it as a string. For instance, you could write a prepared statement as:
SELECT a from bla WHERE b LIKE ?
Here is an example (proper error handling omitted).
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

// > select * from bla ;
// +------+------+
// | a    | b    |
// +------+------+
// | toto | titi |
// | bobo | bibi |
// +------+------+

func main() {
	// Open connection
	db, err := sql.Open("mysql", "root:XXXXXXX@/test")
	if err != nil {
		panic(err.Error()) // proper error handling instead of panic in your app
	}
	defer db.Close()

	// Prepare statement for reading data
	stmtOut, err := db.Prepare("SELECT a FROM bla WHERE b LIKE ?")
	if err != nil {
		panic(err.Error()) // proper error handling instead of panic in your app
	}
	defer stmtOut.Close()

	var a string
	b := "bi%" // LIKE 'bi%'

	err = stmtOut.QueryRow(b).Scan(&a)
	if err != nil {
		panic(err.Error()) // proper error handling instead of panic in your app
	}

	fmt.Printf("a = %s\n", a)
}
Note that the % character is part of the bound string, not of the query text.
How to bind arguments for IN statement correctly?
None of the databases I know of allows binding a list of parameters directly to an IN clause. This is not a limitation of database/sql or the drivers; it is simply not supported by most database servers.
You have several ways to work around the problem:
you can build a query with a fixed number of placeholders in the IN clause. Only bind the parameters you are provided with, and fill the remaining placeholders with the NULL value. If you have more values than the fixed number you have chosen, just execute the query several times. This is not extremely elegant, but it can be effective.
you can build multiple queries with various numbers of placeholders: one query for IN (?), a second query for IN (?, ?), a third for IN (?, ?, ?), etc. Keep those prepared queries in a statement cache, and choose the right one at runtime depending on the number of input parameters. Note that this takes memory, and the maximum number of prepared statements is generally limited, so it cannot be used when the number of parameters is high.
if the number of input parameters is high, insert them into a temporary table, and replace the IN clause in the query with a join against the temporary table. This is effective if you manage to perform the insertion into the temporary table in one round trip. With Go and database/sql, it is not convenient because there is no way to batch queries.
Each of these solutions has drawbacks; none of them is perfect. A rough variation of the placeholder approach is sketched below.
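As a rough sketch of that variation (not from the answer above): build exactly as many ? placeholders as there are values at run time, so each distinct length is prepared once. This assumes db is an open *sql.DB, a MySQL-style ? placeholder, and the fmt, strings and log packages imported:
// ids is the variable-length list of values for the IN clause
// note: an empty ids slice would produce the invalid SQL "IN ()" and needs special handling
ids := []interface{}{"ID1", "ID2", "ID3"}

// build one "?" per value: "?,?,?"
placeholders := strings.TrimSuffix(strings.Repeat("?,", len(ids)), ",")
query := fmt.Sprintf("SELECT a FROM bla WHERE x IN (%s)", placeholders)

// the values are still bound as parameters, so no manual escaping is needed
rows, err := db.Query(query, ids...)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()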
I'm a newbie to Go but just to answer the first part:
First question, what is the cross-database way to bind arguments? (that works on any database)
If you use sqlx, which is a superset of the built-in sql package, then you should be able to use sqlx.DB.Rebind to achieve that.
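To illustrate (a sketch assuming the github.com/jmoiron/sqlx package and a placeholder DSN and table): sqlx.In expands a slice argument into the right number of ? placeholders, and Rebind rewrites those generic ? placeholders into whatever bindvar style the current driver expects (for example $1, $2 for PostgreSQL). This also happens to address the IN-clause problem from the question:
package main

import (
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
	"github.com/jmoiron/sqlx"
)

func main() {
	db, err := sqlx.Connect("mysql", "username:password@tcp(server-host)/my-database")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ids := []string{"ID1", "ID2", "ID3"}

	// expand the slice into IN (?, ?, ?) and flatten the arguments
	query, args, err := sqlx.In("SELECT identifier FROM some_table WHERE identifier IN (?)", ids)
	if err != nil {
		log.Fatal(err)
	}

	// rewrite the generic ? placeholders into the driver's native style
	query = db.Rebind(query)

	var identifiers []string
	if err := db.Select(&identifiers, query, args...); err != nil {
		log.Fatal(err)
	}
	fmt.Println(identifiers)
}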
I had this same question, and after reading the answers I started to look for another way to bind arguments for the IN statement.
Here is an example of what I did; not the most elegant solution, but it works for me.
What I did was to create a SELECT query with the parameters statically set in the query text, not using the bind feature at all.
It could be a good idea to sanitize the string that comes from the Marshal call, to be sure and safe, but I don't need that right now.
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

type Result struct {
	Identifier string
	Enabled    bool
}

func main() {
	// Open connection
	db, err := sql.Open("mysql", "username:password@tcp(server-host)/my-database")
	if err != nil {
		panic(err.Error()) // proper error handling instead of panic in your app
	}
	defer db.Close()

	// this is an example of a variable list of IDs
	idList := []string{"ID1", "ID2", "ID3", "ID4", "ID5", "IDx"}

	// convert the list to a JSON string
	formatted, _ := json.Marshal(idList)
	// a JSON array starts with '[' and ends with ']', so we replace them with '(' and ')'
	formatted[0] = '('
	formatted[len(formatted)-1] = ')'

	// create a static select query
	query := fmt.Sprintf("SELECT identifier, is_enabled FROM some_table WHERE identifier in %s", string(formatted))

	// run the query
	rows, err := db.Query(query)
	if err != nil {
		panic(err.Error()) // proper error handling instead of panic in your app
	}
	defer rows.Close()

	var result []Result

	// fetch rows
	for rows.Next() {
		var r0 Result
		if err := rows.Scan(&r0.Identifier, &r0.Enabled); err != nil {
			log.Fatal(err)
		}
		// append the row to the result
		result = append(result, r0)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("result = %v\n", result)
}