Go SQL scanned rows getting overwritten

I'm trying to read all the rows from a table on a SQL server and store them in string slices for later use. The issue I'm running into is that the previously scanned rows get overwritten every time I scan a new row, even though I've converted all the mutable byte slices to immutable strings and saved the resulting slices to another slice. Here is the code I'm using:
rawResult := make([]interface{}, len(cols)) // holds anything that could be in a row
result := make([]string, len(cols))         // will hold all row elements as strings
var results [][]string                      // will hold all the result string slices
dest := make([]interface{}, len(cols))      // temporary, to pass into Scan
for i := range rawResult {
	dest[i] = &rawResult[i] // fill dest with pointers to rawResult to pass into Scan
}
for rows.Next() { // for each row
	err = rows.Scan(dest...) // scan the row
	if err != nil {
		log.Fatal("Failed to scan row: ", err)
	}
	for i, raw := range rawResult { // for each scanned value in a row
		switch raw := raw.(type) { // determine type, convert to string
		case int64:
			result[i] = strconv.FormatInt(raw, 10)
		case float64:
			result[i] = strconv.FormatFloat(raw, 'f', -1, 64)
		case bool:
			result[i] = strconv.FormatBool(raw)
		case []byte:
			result[i] = string(raw)
		case string:
			result[i] = raw
		case time.Time:
			result[i] = raw.String()
		case nil:
			result[i] = ""
		default: // shouldn't actually be reachable since all types have been covered
			log.Fatalf("Unexpected type %T", raw)
		}
	}
	results = append(results, result) // append the result to our slice of results
}
I'm sure this has something to do with the way Go handles variables and memory, but I can't seem to fix it. Can somebody explain what I'm not understanding?

You should create a new slice for each data row. Note that a slice holds a pointer to its underlying array, so every slice you appended to results points at the same backing array. That's why you're seeing this behaviour.

When you create a slice with make(), you get a slice value (not a pointer to one), but that value refers to a single underlying array, and no new memory is allocated when an element is reassigned. Hence
result := make([]string, 5)
has a fixed backing array with room for 5 strings; when an element is reassigned, it occupies the same memory as before, overwriting the old value.
Hopefully the following example makes things clear.
http://play.golang.org/p/3w2NtEHRuu
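In case the playground link goes stale, here is a minimal reconstruction of the aliasing behaviour (my own sketch, not necessarily the original snippet):

package main

import "fmt"

func main() {
	result := make([]string, 1) // one backing array, reused on every iteration
	var results [][]string
	for _, v := range []string{"a", "b", "c"} {
		result[0] = v
		results = append(results, result) // appends the same slice header each time
	}
	fmt.Println(results) // [[c] [c] [c]]: every element shares the one array
}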
So in your program you keep changing the contents of the same memory and appending the same slice over and over. To solve this problem, create your result slice inside the loop.

Move result := make([]string, len(cols)) into your for loop that loops over the available rows.
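For the code above that looks like this (a sketch; only the loop changes, the conversion switch stays as it was):

for rows.Next() {
	result := make([]string, len(cols)) // fresh backing array for every row
	err = rows.Scan(dest...)
	if err != nil {
		log.Fatal("Failed to scan row: ", err)
	}
	// ... convert rawResult into result exactly as before ...
	results = append(results, result) // now each appended slice owns its data
}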

Related

Problem reading uniqueidentifier from SQL response

I have tried to find the solution to this problem, but I keep banging my head against the wall on this one.
This function is part of a Go SQL wrapper, and getJSON is called to extract the information from the SQL response.
The problem is that the id parameter comes back as gibberish and does not match the desired response; all the other parameters read correctly, so this really weirds me out.
Thank you in advance for any attempt at figuring this problem out, it is really appreciated :-)
func getJSON(rows *sqlx.Rows) ([]byte, error) {
	columns, err := rows.Columns()
	rawResult := make([][]byte, len(columns))
	dest := make([]interface{}, len(columns))
	for i := range rawResult {
		dest[i] = &rawResult[i]
	}
	defer rows.Close()
	var results []map[string][]byte
	for rows.Next() {
		result := make(map[string][]byte, len(columns))
		rows.Scan(dest...)
		for i, raw := range rawResult {
			if raw == nil {
				result[columns[i]] = []byte("")
			} else {
				result[columns[i]] = raw
				fmt.Println(columns[i] + " : " + string(raw))
			}
		}
		results = append(results, result)
	}
	s, err := json.Marshal(results)
	if err != nil {
		panic(err)
	}
	rows.Close()
	return s, nil
}
An example of the response, taking from the terminal:
id : r�b�X��M���+�2%
name : cat
issub : false
Expected result:
id : E262B172-B158-4DEF-8015-9BA12BF53225
name : cat
issub : false
That's not about type conversion.
A UUID (of any type; presently there are four) is defined to be a 128-bit-long lump of bytes, which is 128/8 = 16 bytes.
This means arbitrary bytes, not necessarily printable ones.
What you're after is a string representation of a UUID value, which:
- separates certain groups of bytes using dashes, and
- formats each byte in these groups using hexadecimal (base-16) notation.
Since base-16 represents the values 0 through 15 with a single digit ('0' through 'F'), a single byte is represented by two such digits, one per group of 4 bits.
I think any sensible UUID package should implement a "decoding" function/method which would produce a string representation out of those 16 bytes.
I have picked a random package produced by performing this search query: github.com/google/uuid has a FromBytes function which produces a UUID from a given byte slice, and the resulting type implements the String() method, which produces what you're after.
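A minimal sketch of that approach (the byte literal below is hypothetical test data, not taken from the question):

package main

import (
	"fmt"

	"github.com/google/uuid"
)

// uuidString converts the 16 raw bytes of a uniqueidentifier column
// into the familiar dashed hex form.
func uuidString(raw []byte) (string, error) {
	u, err := uuid.FromBytes(raw) // errors unless len(raw) == 16
	if err != nil {
		return "", err
	}
	// Caveat (assumption): some drivers return SQL Server GUIDs in their
	// mixed-endian on-disk layout, in which case the first three groups
	// will appear byte-swapped and need reordering first.
	return u.String(), nil
}

func main() {
	raw := []byte{0xe2, 0x62, 0xb1, 0x72, 0xb1, 0x58, 0x4d, 0xef,
		0x80, 0x15, 0x9b, 0xa1, 0x2b, 0xf5, 0x32, 0x25}
	s, err := uuidString(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // e262b172-b158-4def-8015-9ba12bf53225
}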

Three different ways to instantiate Arrays in AssemblyScript

I'm writing a smart contract and want to use Arrays to manipulate data, but looking at the AssemblyScript docs, I'm not sure of the best way to proceed.
It seems fine to me to just use:
let testData:string[] = []
but when I consulted the assemblyscript docs, there are three recommended ways to create an Array:
// The Array constructor implicitly sets `.length = 10`, leading to an array of
// ten times `null` not matching the value type `string`. So, this will error:
var arr = new Array<string>(10);
// arr.length == 10 -> ERROR
// To account for this, the .create method has been introduced that initializes
// the backing capacity normally but leaves `.length = 0`. So, this will work:
var arr = Array.create<string>(10);
// arr.length == 0 -> OK
// When pushing to the latter array or subsequently inserting elements into it,
// .length will automatically grow just like one would expect, with the backing
// buffer already properly sized (no resize will occur). So, this is fine:
for (let i = 0; i < 10; ++i) arr[i] = "notnull";
// arr.length == 10 -> OK
When would I want to use one type of instantiation over another? Why wouldn't I just always use the version I presented in the beginning?
Nothing wrong with the array literal approach. It is basically equivalent to
let testData = new Array<string>();
However, sometimes you know what the length of the array should be and in such cases, preallocating the memory using Array.create is more efficient.
UPDATE
With this PR, Array.create is deprecated and should not be used anymore.
OLD ANSWER
let testData:string[] = []
semantically the same as
let testData = new Array<string>()
AssemblyScript doesn't support preallocated sparse arrays (arrays with holes) for reference elements that are not explicitly declared as nullable, like:
let data = new Array<string>(10);
data[9] = 1; // will be compile error
Instead you could use:
let data = new Array<string | null>(10);
assert(data.length == 10); // ok
assert(data[0] === null); // ok
or Array.create, but in this case the length will be zero; Array.create actually just reserves capacity for the backing buffer.
let data = Array.create<string>(10);
assert(data.length == 0); // true
For plain (non-reference) types you can use the usual way, without caring about nullability or creating the array via Array.create:
let data = new Array<i32>(10);
assert(data.length == 10); // ok
assert(data[0] == 0); // ok

Specman/e list of lists (multidimensional array)

How can I create a fixed multidimensional array in Specman/e using variables?
And then access individual elements or whole rows?
For example in SystemVerilog I would have:
module top;
  function automatic my_func();
    bit [7:0] arr [4][8]; // matrix: 4 rows, 8 columns of bytes
    bit [7:0] row [8];    // array: 8 elements of bytes
    row = '{1, 2, 3, 4, 5, 6, 7, 8};
    $display("Array:");
    foreach (arr[i]) begin
      arr[i] = row;
      $display("row[%0d] = %p", i, row);
    end
    $display("\narr[2][3] = %0d", arr[2][3]);
  endfunction : my_func

  initial begin
    my_func();
  end
endmodule : top
This will produce this output:
Array:
row[0] = '{'h1, 'h2, 'h3, 'h4, 'h5, 'h6, 'h7, 'h8}
row[1] = '{'h1, 'h2, 'h3, 'h4, 'h5, 'h6, 'h7, 'h8}
row[2] = '{'h1, 'h2, 'h3, 'h4, 'h5, 'h6, 'h7, 'h8}
row[3] = '{'h1, 'h2, 'h3, 'h4, 'h5, 'h6, 'h7, 'h8}
arr[2][3] = 4
Can someone rewrite my_func() in Specman/e?
There are no fixed arrays in e. But you can define a variable of a list type, including a multi-dimensional list, such as:
var my_md_list: list of list of my_type;
It is not the same as a multi-dimensional array in other languages, in the sense that each inner list (being an element of the outer list) may in general be of a different size. But you can still achieve your purpose with it. For example, your code might be rewritten in e more or less like this:
var arr: list of list of byte;
var row: list of byte = {1;2;3;4;5;6;7;8};
for i from 0 to 3 do {
    arr.add(row.copy());
    print arr[i];
};
print arr[2][3];
Notice the usage of row.copy() - it ensures that each outer list element will be a copy of the original list.
If we don't use copy(), we will get a list of many pointers to the same list. This may also be legitimate, depending on the purpose of your code.
In the case of a field (as opposed to a local variable), it is also possible to declare it with a given size. This size is, again, not "fixed" and can be modified at run time (by adding or removing items), but it determines the original size of the list upon creation, for example:
struct foo {
    my_list[4][8]: list of list of int;
};

How to avoid slice reference to the same memory block

I ran into a problem when querying from a database and trying to insert the rows into a slice (containing some map[string]interface{} values).
Even though I already used make to create a new memory block, the slice elements always seem to map to the same memory block.
type DBResult []map[string]interface{}

func ResultRows(rows *sql.Rows, limit int) (DBResult, error) {
	cols, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	vals := make([]sql.RawBytes, len(cols))
	scanArgs := make([]interface{}, len(vals))
	for i := range vals {
		scanArgs[i] = &vals[i]
	}
	if limit > QUERY_HARD_LIMIT {
		limit = QUERY_HARD_LIMIT
	}
	res := make(DBResult, 0, limit)
	for rows.Next() {
		err = rows.Scan(scanArgs...)
		m := make(map[string]interface{})
		for i := range vals {
			m[cols[i]] = vals[i]
		}
		/* Append m to res */
		res = append(res, m)
		/* The value of m has been changed */
		fmt.Printf("lib: m:\n\n%s\n\n", m)
		/* When printing res, always mapping to the same memory block */
		fmt.Printf("lib: res:\n\n%s\n\n", res)
	}
	return res, err
}
The following is the output; you can see that the contents of res are all the same:
m = map[comment:first_comment id:0]
res = [map[id:0 comment:first_comment]]
m = map[id:1 comment:first_comment]
res = [map[id:1 comment:first_comment] map[id:1 comment:first_comment]]
m = map[id:2 comment:first_comment]
res = [map[id:2 comment:first_comment] map[id:2 comment:first_comment] map[id:2 comment:first_comment]]
What I expect is res = [map[id:0 comment:first_comment] map[id:1 comment:first_comment] map[id:2 comment:first_comment]]
Thanks for reading.
According to the documentation of Rows.Scan (https://golang.org/pkg/database/sql/#Rows.Scan):
Scan copies the columns in the current row into the values pointed at by dest.
If an argument has type *[]byte, Scan saves in that argument a copy of the corresponding data. The copy is owned by the caller and can be modified and held indefinitely. The copy can be avoided by using an argument of type *RawBytes instead; see the documentation for RawBytes for restrictions on its use.
If an argument has type *interface{}, Scan copies the value provided by the underlying driver without conversion. If the value is of type []byte, a copy is made and the caller owns the result.
In your case you use sql.RawBytes as the Scan arguments, and that is the problem: RawBytes is memory owned by the driver and reused on the next call to Scan, so every map you stored ends up pointing at the same bytes. Use another argument type (for example *[]byte, which makes Scan hand you a copy), or copy the RawBytes yourself before storing them.
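A sketch of that fix based on the code above; only the inner loop changes, everything else stays as it was:

for i := range vals {
	// vals[i] is driver-owned memory that the next Scan will overwrite,
	// so store a private copy instead of the RawBytes themselves.
	b := make([]byte, len(vals[i]))
	copy(b, vals[i])
	m[cols[i]] = b
}

Alternatively, declare vals as [][]byte instead of []sql.RawBytes; per the quoted docs, Scan then makes the copies for you.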

Allocate uninitialized slice

Is there some way to allocate an uninitialized slice in Go? A frequent pattern is to create a slice of a given size as a buffer, and then only use part of it to receive data. For example:
b := make([]byte, 0x20000) // b is zero initialized
n, err := conn.Read(b)
// do stuff with b[:n]. all of b is zeroed for no reason
This initialization can add up when lots of buffers are being allocated, as the spec states it will default initialize the array on allocation.
You can get non-zeroed byte buffers from bufs.Cache.Get (or see CCache for the concurrency-safe version). From the docs:
NOTE: The buffer returned by Get is not guaranteed to be zeroed. That's okay for e.g. passing a buffer to io.Reader. If you need a zeroed buffer use Cget.
Technically you could, by allocating the memory outside the Go runtime and using unsafe.Pointer, but this is definitely the wrong thing to do.
A better solution is to reduce the number of allocations. Move buffers outside loops, or, if you need per-goroutine buffers, allocate several of them in a pool and only allocate more when they're needed.
type BufferPool struct {
	Capacity   int
	buffersize int
	buffers    [][]byte
	lock       sync.Mutex
}

func NewBufferPool(buffersize int, capacity int) *BufferPool {
	ret := new(BufferPool)
	ret.Capacity = capacity
	ret.buffersize = buffersize
	return ret
}

func (b *BufferPool) Alloc() []byte {
	b.lock.Lock()
	defer b.lock.Unlock()
	if len(b.buffers) == 0 {
		return make([]byte, b.buffersize)
	}
	ret := b.buffers[len(b.buffers)-1]
	b.buffers = b.buffers[:len(b.buffers)-1]
	return ret
}

func (b *BufferPool) Free(buf []byte) {
	if len(buf) != b.buffersize {
		panic("illegal free")
	}
	b.lock.Lock()
	defer b.lock.Unlock()
	if len(b.buffers) < b.Capacity {
		b.buffers = append(b.buffers, buf)
	}
}
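Hypothetical usage, mirroring the conn.Read example from the question (conn and process are placeholders):

pool := NewBufferPool(0x20000, 16) // 128 KiB buffers, keep at most 16 around
b := pool.Alloc()
n, err := conn.Read(b)
if err != nil {
	log.Fatal(err)
}
process(b[:n]) // hypothetical consumer of the received data
pool.Free(b)   // hand the buffer back for reuse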