I ran into a problem when querying from the database and inserting the rows into a slice (of map[string]interface{}).
Even though I use make to create a new map on every iteration, every element of the slice seems to end up pointing at the same memory.
type DBResult []map[string]interface{}

func ResultRows(rows *sql.Rows, limit int) (DBResult, error) {
    cols, err := rows.Columns()
    if err != nil {
        return nil, err
    }
    vals := make([]sql.RawBytes, len(cols))
    scanArgs := make([]interface{}, len(vals))
    for i := range vals {
        scanArgs[i] = &vals[i]
    }
    if limit > QUERY_HARD_LIMIT {
        limit = QUERY_HARD_LIMIT
    }
    res := make(DBResult, 0, limit)
    for rows.Next() {
        err = rows.Scan(scanArgs...)
        m := make(map[string]interface{})
        for i := range vals {
            m[cols[i]] = vals[i]
        }
        /* Append m to res */
        res = append(res, m)
        /* The value of m has been changed */
        fmt.Printf("lib: m:\n\n%s\n\n", m)
        /* When printing res, always mapping to the same memory block */
        fmt.Printf("lib: res:\n\n%s\n\n", res)
    }
    return res, err
}
The following is the output; you can see that every entry of res ends up with the same contents:
m = map[comment:first_comment id:0]
res = [map[id:0 comment:first_comment]]
m = map[id:1 comment:first_comment]
res = [map[id:1 comment:first_comment] map[id:1 comment:first_comment]]
m = map[id:2 comment:first_comment]
res = [map[id:2 comment:first_comment] map[id:2 comment:first_comment] map[id:2 comment:first_comment]]
What I expect is res = [map[id:0 comment:first_comment] map[id:1 comment:first_comment] map[id:2 comment:first_comment]].
Thanks for reading.
According to the documentation of Rows.Scan (https://golang.org/pkg/database/sql/#Rows.Scan):
Scan copies the columns in the current row into the values pointed at by dest.
If an argument has type *[]byte, Scan saves in that argument a copy of the corresponding data. The copy is owned by the caller and can be modified and held indefinitely. The copy can be avoided by using an argument of type *RawBytes instead; see the documentation for RawBytes for restrictions on its use.
If an argument has type *interface{}, Scan copies the value provided by the underlying driver without conversion. If the value is of type []byte, a copy is made and the caller owns the result.
In your case you use sql.RawBytes as the arguments to Scan. That is the problem: RawBytes holds a reference to memory owned by the driver, which is only valid until the next call to Next, Scan, or Close, so every map you build ends up referencing the same reused buffers. Scan into a type that copies (for example *string or *[]byte), or copy the RawBytes yourself before storing them.
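For example, a minimal sketch of one way to fix the loop in the function above is to copy the bytes out of each RawBytes value before putting it into the map, e.g. by converting it to a string:

    for rows.Next() {
        if err = rows.Scan(scanArgs...); err != nil {
            return res, err
        }
        m := make(map[string]interface{}, len(cols))
        for i := range vals {
            if vals[i] == nil {
                m[cols[i]] = nil // NULL column
                continue
            }
            // string(vals[i]) copies the bytes, so the map no longer aliases
            // the driver-owned buffer that gets reused on the next Scan
            m[cols[i]] = string(vals[i])
        }
        res = append(res, m)
    }

Alternatively, declare vals as []string or []sql.NullString and let Scan do the copying; per the documentation quoted above, *[]byte destinations also receive a caller-owned copy.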
I have been trying to find a solution for this problem, but I keep banging my head against the wall.
This function is part of a Go SQL wrapper; getJSON is called to extract the information from the SQL response.
The problem is that the id field comes back as gibberish and does not match the expected value, while all the other fields read correctly, which is what puzzles me.
Thank you in advance for any attempt at figuring this problem out; it is really appreciated :-)
func getJSON(rows *sqlx.Rows) ([]byte, error) {
    columns, err := rows.Columns()
    rawResult := make([][]byte, len(columns))
    dest := make([]interface{}, len(columns))
    for i := range rawResult {
        dest[i] = &rawResult[i]
    }
    defer rows.Close()
    var results []map[string][]byte
    for rows.Next() {
        result := make(map[string][]byte, len(columns))
        rows.Scan(dest...)
        for i, raw := range rawResult {
            if raw == nil {
                result[columns[i]] = []byte("")
            } else {
                result[columns[i]] = raw
                fmt.Println(columns[i] + " : " + string(raw))
            }
        }
        results = append(results, result)
    }
    s, err := json.Marshal(results)
    if err != nil {
        panic(err)
    }
    rows.Close()
    return s, nil
}
An example of the response, taken from the terminal:
id : r�b�X��M���+�2%
name : cat
issub : false
Expected result:
id : E262B172-B158-4DEF-8015-9BA12BF53225
name : cat
issub : false
That's not about type conversion.
A UUID (of any type; presently there are four) is defined to be a 128-bit-long lump of bytes, which is 128/8 = 16 bytes.
This means any bytes — not necessarily printable.
What you're after is a string representation of a UUID value, which:
Separates certain groups of bytes using dashes.
Formats each byte in these groups using hexadecimal (base-16) representation.
Since base-16 positional notation represents the values 0 through 15 using a single digit ('0' through 'F'), a single byte is represented by two such digits, one per group of 4 bits.
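For illustration (not part of the original answer), Go's %X verb already renders each byte of a slice as two hexadecimal digits:

    b := []byte{0xE2, 0x62, 0xB1, 0x72}
    fmt.Printf("%X\n", b) // prints E262B172: two hex digits per byte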
I think any sensible UUID package should implement a "decoding" function/method which would produce a string representation out of those 16 bytes.
I have picked a random package produced by performing this search query, and it has github.com/google/uuid.FromBytes, which produces a UUID from a given byte slice; the type of the resulting value implements the String() method, which produces what you're after.
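For example, a small sketch using github.com/google/uuid (this assumes the driver hands back the 16 bytes in RFC 4122 order; some databases, notably SQL Server, store the first three groups byte-swapped, in which case they need to be reordered first):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/uuid"
    )

    func main() {
        // raw stands in for the 16-byte value read from the database.
        raw := []byte{
            0xE2, 0x62, 0xB1, 0x72, 0xB1, 0x58, 0x4D, 0xEF,
            0x80, 0x15, 0x9B, 0xA1, 0x2B, 0xF5, 0x32, 0x25,
        }

        id, err := uuid.FromBytes(raw) // errors unless len(raw) == 16
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(id.String()) // e262b172-b158-4def-8015-9ba12bf53225
    }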
I am writing generic code to query data from any RDS table. I have gone through many Stack Overflow answers but none of them worked for me. I have gone through the links below:
panic: sql: expected 1 destination arguments in Scan, not <number> golang, pq, sql
How to query any table of RDS using Golang SDK
My first code is
package main

import (
    "encoding/json"
    "fmt"
)

type BA_Client struct {
    ClientId    int    `json:ClientId;"`
    CompanyName string `json:CompanyName;"`
    CreateDate  string `json:CreateDate;"`
}

func main() {
    conn, _ := getConnection() // Get database connection
    query := `select * from IMBookingApp.dbo.BA_Client
        ORDER BY
        ClientId ASC
        OFFSET 56 ROWS
        FETCH NEXT 10 ROWS ONLY ;`

    var p []byte
    err := conn.QueryRow(query).Scan(&p)
    if err != nil {
        fmt.Println("Error1", err)
    }
    fmt.Println("Data:", p)

    var m BA_Client
    err = json.Unmarshal(p, &m)
    if err != nil {
        fmt.Println("Error2", err)
    }
}
My second code is
conn, _ := getConnection()
query := `select * from IMBookingApp.dbo.BA_Client__c
    ORDER BY
    ClientId__c ASC
    OFFSET 56 ROWS
    FETCH NEXT 10 ROWS ONLY ;`

rows, err := conn.Query(query)
if err != nil {
    fmt.Println("Error:")
    log.Fatal(err)
}
println("rows", rows)
defer rows.Close()

columns, err := rows.Columns()
fmt.Println("columns", columns)
if err != nil {
    panic(err)
}

for rows.Next() {
    receiver := make([]*string, len(columns))
    err := rows.Scan(&receiver)
    if err != nil {
        fmt.Println("Error reading rows: " + err.Error())
    }
    fmt.Println("Data:", p)
    fmt.Println("receiver", receiver)
}
With both pieces of code I am getting the same error:
sql: expected 3 destination arguments in Scan, not 1
Could it be because I am using SQL Server and not MySQL? I would appreciate any help in finding the issue.
When you want to use a slice of inputs for your row scan, use the variadic 3-dots notation ... to convert the slice into individual parameters, like so:
err := rows.Scan(receiver...)
Your (in this case three) columns, passed as variadic arguments, will effectively expand like so:
// len(receiver) == 3
err := rows.Scan(receiver[0], receiver[1], receiver[2])
EDIT:
The Scan method's destination parameters must be of type interface{}, and a []*string cannot be passed directly as ...interface{}, so we need an intermediate slice. For example:
is := make([]interface{}, len(receiver))
for i := range is {
    is[i] = &receiver[i]
    // each is[i] is of type interface{} - compatible with Scan() -
    // and points at the corresponding `*string` slot in `receiver`
}
// ...
err := rows.Scan(is...)
// `receiver` now holds the scanned values as `*string` items (nil for NULL columns)
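Putting it together, the row loop from the question might look roughly like this (a sketch, keeping the []*string receiver so NULL columns simply stay nil):

    for rows.Next() {
        receiver := make([]*string, len(columns)) // one *string per column
        is := make([]interface{}, len(receiver))
        for i := range is {
            is[i] = &receiver[i]
        }
        if err := rows.Scan(is...); err != nil {
            fmt.Println("Error reading rows: " + err.Error())
            continue
        }
        for i, v := range receiver {
            if v != nil {
                fmt.Println(columns[i], ":", *v)
            } else {
                fmt.Println(columns[i], ": NULL")
            }
        }
    }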
Try this (no ampersand in front of receiver) for the second example:
err := rows.Scan(receiver)
receiver and *string are already reference types; there is no need to add another level of indirection.
Also, select specific fields instead of the whole table:
query := `select * from IMBookingApp.dbo.BA_Client__c
ORDER BY
ClientId__c ASC
OFFSET 56 ROWS
FETCH NEXT 10 ROWS ONLY ;`
change it to:
query := `select ClientId, CompanyName, CreateDate from IMBookingApp.dbo.BA_Client__c
ORDER BY
ClientId__c ASC
OFFSET 56 ROWS
FETCH NEXT 10 ROWS ONLY ;`
Would it be possible to use Gob encoding for appending structs in series to the same file using append? It works for writing, but when reading with the decoder more than once I run into:
extra data in buffer
So I wonder whether that's possible in the first place, or whether I should use something like JSON and append documents on a per-line basis instead. The alternative would be to serialize a whole slice each time, but reading it back as a whole would defeat the purpose of appending.
The gob package wasn't designed to be used this way. A gob stream has to be written by a single gob.Encoder, and it also has to be read by a single gob.Decoder.
The reason for this is because the gob package not only serializes the values you pass to it, it also transmits data to describe their types:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
This is state of the encoder/decoder (what types have been transmitted, and how); a subsequent new encoder/decoder will not (cannot) analyze the preceding stream to reconstruct the same state and continue where a previous encoder/decoder left off.
Of course if you create a single gob.Encoder, you may use it to serialize as many values as you'd like to.
Also you can create a gob.Encoder and write to a file, and then later create a new gob.Encoder, and append to the same file, but you must use 2 gob.Decoders to read those values, exactly matching the encoding process.
As a demonstration, let's follow an example. This example will write to an in-memory buffer (bytes.Buffer). 2 subsequent encoders will write to it, then we will use 2 subsequent decoders to read the values. We'll write values of this struct:
type Point struct {
    X, Y int
}
For short, compact code, I use this "error handler" function:
func he(err error) {
    if err != nil {
        panic(err)
    }
}
And now the code:
const n, m = 3, 2

buf := &bytes.Buffer{}

// First encoder: writes n values (and, before the first value, the type specification).
e := gob.NewEncoder(buf)
for i := 0; i < n; i++ {
    he(e.Encode(&Point{X: i, Y: i * 2}))
}

// Second encoder appending to the same buffer: re-transmits the type specification.
e = gob.NewEncoder(buf)
for i := 0; i < m; i++ {
    he(e.Encode(&Point{X: i, Y: 10 + i}))
}

// First decoder: must read exactly the n values written by the first encoder.
d := gob.NewDecoder(buf)
for i := 0; i < n; i++ {
    var p *Point
    he(d.Decode(&p))
    fmt.Println(p)
}

// Second decoder for the remaining m values.
d = gob.NewDecoder(buf)
for i := 0; i < m; i++ {
    var p *Point
    he(d.Decode(&p))
    fmt.Println(p)
}
Output (try it on the Go Playground):
&{0 0}
&{1 2}
&{2 4}
&{0 10}
&{1 11}
Note that if we used only one decoder to read all the values (looping until i < n + m), we would get the same error message you posted in your question when the iteration reaches the (n+1)th value, because the subsequent data is not a serialized Point but the start of a new gob stream.
So if you want to stick with the gob package, you have to slightly modify and enhance your encoding/decoding process: you have to somehow mark the boundaries where a new encoder is used (so that when decoding, you know you have to create a new decoder to read the subsequent values).
You may use different techniques to achieve this:
You may write out a count before you proceed to write values; this number tells how many values were written using the current encoder (a sketch of this approach follows below).
If you don't want to or can't tell how many values will be written with the current encoder, you may opt to write out a special end-of-encoder value when you don't write more values with the current encoder. When decoding, if you encounter this special end-of-encoder value, you'll know you have to create a new decoder to be able to read more values.
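As an illustration of the first, count-based technique, here is a sketch (not part of the original answer) using the Point struct from the example above; it assumes "encoding/gob" and "io" are imported, and that the reader passed in implements io.ByteReader (e.g. a *bytes.Buffer or a shared *bufio.Reader) so that consecutive decoders do not read ahead past their own data:

    // appendPoints starts a new encoder "session": it first writes how many
    // values follow, then the values themselves.
    func appendPoints(w io.Writer, pts []Point) error {
        enc := gob.NewEncoder(w)
        if err := enc.Encode(len(pts)); err != nil {
            return err
        }
        for i := range pts {
            if err := enc.Encode(&pts[i]); err != nil {
                return err
            }
        }
        return nil
    }

    // readAllPoints creates a new decoder per session and uses the leading
    // count to know how many values to read before the next session starts.
    func readAllPoints(r io.Reader) ([]Point, error) {
        var all []Point
        for {
            dec := gob.NewDecoder(r)
            var n int
            if err := dec.Decode(&n); err == io.EOF {
                return all, nil // no more sessions
            } else if err != nil {
                return nil, err
            }
            for i := 0; i < n; i++ {
                var p Point
                if err := dec.Decode(&p); err != nil {
                    return nil, err
                }
                all = append(all, p)
            }
        }
    }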
Some things to note here:
The gob package is most efficient, most compact if only a single encoder is used, because each time you create and use a new encoder, the type specifications will have to be re-transmitted, causing more overhead, and making the encoding / decoding process slower.
You can't seek in the data stream, you can only decode any value if you read the whole file from the beginning up until the value you want. Note that this somewhat applies even if you use other formats (such as JSON or XML).
If you want seeking functionality, you'd need to manage an index file separately, which would tell at which positions new encoders / decoders start, so you could seek to that position, create a new decoder, and start reading values from there.
Check a related question: Efficient Go serialization of struct to disk
In addition to the above, I suggest using an intermediate structure to exclude the gob header:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "io"
    "log"
)

type Point struct {
    X, Y int
}

func main() {
    buf := new(bytes.Buffer)
    enc, _, err := NewEncoderWithoutHeader(buf, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    enc.Encode(&Point{10, 10})
    fmt.Println(buf.Bytes())
}

type HeaderSkiper struct {
    src io.Reader
    dst io.Writer
}

func (hs *HeaderSkiper) Read(p []byte) (int, error) {
    return hs.src.Read(p)
}

func (hs *HeaderSkiper) Write(p []byte) (int, error) {
    return hs.dst.Write(p)
}

func NewEncoderWithoutHeader(w io.Writer, sample interface{}) (*gob.Encoder, *bytes.Buffer, error) {
    hs := new(HeaderSkiper)
    hdr := new(bytes.Buffer)
    hs.dst = hdr
    enc := gob.NewEncoder(hs)
    // Write sample with header info
    if err := enc.Encode(sample); err != nil {
        return nil, nil, err
    }
    // Change writer
    hs.dst = w
    return enc, hdr, nil
}

func NewDecoderWithoutHeader(r io.Reader, hdr *bytes.Buffer, dummy interface{}) (*gob.Decoder, error) {
    hs := new(HeaderSkiper)
    hs.src = hdr
    dec := gob.NewDecoder(hs)
    if err := dec.Decode(dummy); err != nil {
        return nil, err
    }
    hs.src = r
    return dec, nil
}
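For completeness, a possible round trip with these helpers (a sketch; it could replace the body of main above, keeping the hdr buffer instead of discarding it and handing it to NewDecoderWithoutHeader later):

    buf := new(bytes.Buffer)
    enc, hdr, err := NewEncoderWithoutHeader(buf, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    // buf now receives value bytes only; the header bytes stayed in hdr.
    if err := enc.Encode(&Point{10, 10}); err != nil {
        log.Fatal(err)
    }

    dec, err := NewDecoderWithoutHeader(buf, hdr, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    var p Point
    if err := dec.Decode(&p); err != nil {
        log.Fatal(err)
    }
    fmt.Println(p) // {10 10}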
In addition to icza's great answer, you can use the following trick to append to a gob file that already contains data: when appending for the first time, write and discard the first encode:
1. Create the file and encode gob as usual (the first encode writes the headers).
2. Close the file.
3. Open the file for append.
4. Using an intermediate writer, encode a dummy struct (which writes the headers).
5. Reset the writer.
6. Encode gob as usual (writes no headers).
Example:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "os"
)

type Record struct {
    ID   int
    Body string
}

func main() {
    r1 := Record{ID: 1, Body: "abc"}
    r2 := Record{ID: 2, Body: "def"}

    // encode r1
    var buf1 bytes.Buffer
    enc := gob.NewEncoder(&buf1)
    err := enc.Encode(r1)
    if err != nil {
        log.Fatal(err)
    }

    // write to file
    err = ioutil.WriteFile("/tmp/log.gob", buf1.Bytes(), 0600)
    if err != nil {
        log.Fatal(err)
    }

    // encode dummy (which writes the headers)
    var buf2 bytes.Buffer
    enc = gob.NewEncoder(&buf2)
    err = enc.Encode(Record{})
    if err != nil {
        log.Fatal(err)
    }

    // remove dummy
    buf2.Reset()

    // encode r2
    err = enc.Encode(r2)
    if err != nil {
        log.Fatal(err)
    }

    // open file
    f, err := os.OpenFile("/tmp/log.gob", os.O_WRONLY|os.O_APPEND, 0600)
    if err != nil {
        log.Fatal(err)
    }

    // write r2
    _, err = f.Write(buf2.Bytes())
    if err != nil {
        log.Fatal(err)
    }

    // decode file
    data, err := ioutil.ReadFile("/tmp/log.gob")
    if err != nil {
        log.Fatal(err)
    }
    var r Record
    dec := gob.NewDecoder(bytes.NewReader(data))
    for {
        err = dec.Decode(&r)
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(r)
    }
}
If I create a variable inside an if block, I can't use it later on. If I create a variable before the if block and the if block evaluates to false, I get a "variable created and not used" error.
I'm certain that this is by design and I'm trying to do something I shouldn't, but the logic behind what I'm trying to do makes sense to me. If there is page info in the url, I want to use it in a sql statement later on, but if there isn't page info in the url, then I don't need those variables.
http://pastebin.com/QqwpdM1d
Edit: here's the code:
var pageID string
var offset int

if len(r.URL.Path) > len("/page/") {
    pageID := r.URL.Path[len("/page/"):]
    offset, err := strconv.Atoi(pageID)
    if err != nil {
        log.Fatal(err)
    }
}

conn := "..."
db, err := sql.Open("mysql", conn)
defer db.Close()
if err != nil {
    log.Fatal(err)
}

var rows *sql.Rows
if offset != 0 {
    // ...
}
If you declare a variable before an if statement and you use it inside the if block, it doesn't matter what the condition evaluates to, it is not a compile time error.
The error in your case is that you don't use the declared variable inside the if block. Your code:
var pageID string
var offset int
if len(r.URL.Path) > len("/page/") {
    pageID := r.URL.Path[len("/page/"):]
    offset, err := strconv.Atoi(pageID)
    if err != nil {
        log.Fatal(err)
    }
}
Inside the if you are not assigning to the previously declared pageID; instead, the short variable declaration := creates a new variable that shadows the one declared in the outer block, and it is only in effect until the end of the if block (its scope ends at the end of the innermost containing block).
The solution is (what you most likely wanted) to simply use assignment = (which assigns a value to the existing variable):
pageID = r.URL.Path[len("/page/"):]
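Applied to the code in the question, the if block could look like this (a sketch; note that offset, err := strconv.Atoi(pageID) has the same shadowing problem for offset, so it needs a plain assignment as well):

    var pageID string
    var offset int

    if len(r.URL.Path) > len("/page/") {
        pageID = r.URL.Path[len("/page/"):]
        var err error
        offset, err = strconv.Atoi(pageID)
        if err != nil {
            log.Fatal(err)
        }
    }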
To make this easier to understand, see this example:
i := 1
fmt.Println("Outer:", i)
{
    i := 2 // Short var decl: creates a new i, shadowing the outer
    fmt.Println("Inner:", i)
}
fmt.Println("Outer again:", i)
Output (try it on the Go Playground):
Outer: 1
Inner: 2
Outer again: 1
I'm trying to read all the rows from a table on a SQL server and store them in string slices to use later. The issue I'm running into is that the previously scanned rows get overwritten every time I scan a new row, even though I've converted all the mutable byte slices to immutable strings and saved the resulting slices to another slice. Here is the code I'm using:
rawResult := make([]interface{}, len(cols)) // holds anything that could be in a row
result := make([]string, len(cols))         // will hold all row elements as strings
var results [][]string                      // will hold all the result string slices

dest := make([]interface{}, len(cols)) // temporary, to pass into scan
for i := range rawResult {
    dest[i] = &rawResult[i] // fill dest with pointers to rawResult to pass into scan
}

for rows.Next() { // for each row
    err = rows.Scan(dest...) // scan the row
    if err != nil {
        log.Fatal("Failed to scan row", err)
    }
    for i, raw := range rawResult { // for each scanned value in a row
        switch rawtype := raw.(type) { // determine type, convert to string
        case int64:
            result[i] = strconv.FormatInt(raw.(int64), 10)
        case float64:
            result[i] = strconv.FormatFloat(raw.(float64), 'f', -1, 64)
        case bool:
            result[i] = strconv.FormatBool(raw.(bool))
        case []byte:
            result[i] = string(raw.([]byte))
        case string:
            result[i] = raw.(string)
        case time.Time:
            result[i] = raw.(time.Time).String()
        case nil:
            result[i] = ""
        default: // shouldn't actually be reachable since all types have been covered
            log.Fatalf("Unexpected type %T", rawtype)
        }
    }
    results = append(results, result) // append the result to our slice of results
}
I'm sure this has something to do with the way Go handles variables and memory, but I can't seem to fix it. Can somebody explain what I'm not understanding?
You should create a new slice for each data row. Notice that a slice holds a pointer to its underlying array, so every slice you appended to results has the same pointer to the actual data array. That's why you are seeing this behaviour.
When you create a slice using make() you get back a value (not a pointer to it), but no new memory is allocated when an element is reassigned. Hence
result := make([]string, 5)
will have a fixed backing array holding 5 strings. When an element is reassigned, it occupies the same memory as before, hence overwriting the old value.
Hopefully the following example makes things clear.
http://play.golang.org/p/3w2NtEHRuu
Hence in your program you are changing the contents of the same memory and appending it again and again. To solve this problem, create your result slice inside the loop.
Move result := make([]string, len(cols)) into your for loop that loops over the available rows.
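In other words, the relevant part of the loop becomes (a sketch):

    for rows.Next() {
        if err := rows.Scan(dest...); err != nil {
            log.Fatal("Failed to scan row", err)
        }
        result := make([]string, len(cols)) // fresh backing array for this row
        // ... convert rawResult into result exactly as before ...
        results = append(results, result)
    }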