How do I create a structure for a dynamic SQL query? - sql

In my Go application I make SQL requests to the database. Usually I specify in the query the columns that I want to get from the table and create a structure based on them. You can see an example of the working code below.
QUESTION:
What should I do if I don't know the number and names of the columns in the table? For example, when I run SELECT * FROM filters; instead of SELECT FILTER_ID, FILTER_NAME FROM filters;. How do I create a structure in this case?
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM filters;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    columns, err := rows.Columns()
    if err != nil {
        fmt.Println(err)
        return
    }
    filters := make([]interface{}, len(columns))
    for i := range columns {
        filters[i] = new(sql.RawBytes)
    }
    for rows.Next() {
        if err = rows.Scan(filters...); err != nil {
            fmt.Println(err)
            return
        }
    }
    utils.Response(responseWriter, http.StatusOK, filters)
}

Well, finally I found the solution. As you can see from the code below, I first make the SQL request without specifying column names. Then I get information about the columns with the ColumnTypes() method, which returns column information such as the type, length, and nullability. Next I use the name and scan type of each column to fill a slice of interface{} values to scan into:
for i, column := range columns {
    object[column.Name()] = reflect.New(column.ScanType()).Interface()
    values[i] = object[column.Name()]
}
The full code that I use looks like this:
var GetFilters = func(responseWriter http.ResponseWriter, request *http.Request) {
    rows, err := database.ClickHouse.Query("SELECT * FROM table_name;")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer rows.Close()
    var objects []map[string]interface{}
    for rows.Next() {
        columns, err := rows.ColumnTypes()
        if err != nil {
            fmt.Println(err)
            return
        }
        values := make([]interface{}, len(columns))
        object := map[string]interface{}{}
        for i, column := range columns {
            object[column.Name()] = reflect.New(column.ScanType()).Interface()
            values[i] = object[column.Name()]
        }
        if err = rows.Scan(values...); err != nil {
            fmt.Println(err)
            return
        }
        objects = append(objects, object)
    }
    utils.Response(responseWriter, http.StatusOK, objects)
}
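If the table can contain NULL values, the ScanType() reported by the driver may not be a nullable type. Here is a minimal sketch of the column loop from the code above with a fallback to sql.NullString, assuming you are fine with nullable columns coming back as nullable strings:
// Assumption: scan nullable columns into sql.NullString instead of the driver's ScanType.
for i, column := range columns {
    if nullable, ok := column.Nullable(); ok && nullable {
        object[column.Name()] = new(sql.NullString)
    } else {
        object[column.Name()] = reflect.New(column.ScanType()).Interface()
    }
    values[i] = object[column.Name()]
}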

Use the USER_TAB_COLUMNS view to get the list of columns of the table your query runs against and store it in an array or collection. Later, execute the query and Scan into the columns that you already know from the previous query.
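This applies to Oracle, where USER_TAB_COLUMNS is the data dictionary view for the current schema's tables. A rough sketch of the idea, assuming an Oracle db handle, the filters table from the question, and the strings package for joining the names:
// Look up the column names first, then build the SELECT from them.
colRows, err := db.Query("SELECT column_name FROM user_tab_columns WHERE table_name = 'FILTERS' ORDER BY column_id")
if err != nil {
    fmt.Println(err)
    return
}
defer colRows.Close()
var columns []string
for colRows.Next() {
    var name string
    if err := colRows.Scan(&name); err != nil {
        fmt.Println(err)
        return
    }
    columns = append(columns, name)
}
// The column list is now known, so the scan destinations can be prepared before running:
rows, err := db.Query("SELECT " + strings.Join(columns, ", ") + " FROM filters")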

Related

GoLang rows.Scan() messing up my order. How do I keep the order?

So basically I have an array filled with 5 IDs: first the ProjectId, then 4 ProductIds.
[projectId1 productid1 productid2 productid3 productid4]
I use a custom Query to get these Products by Id.
It looks like this:
func (q *Queries) GetProductsByIdsAndProjektId(ctx context.Context, arg GetProductsByIdsAndProjektIdParams) ([]Products, error) {
    stmt := createGetProductsByIdsAndProjektId(len(arg.ProductIds))
    args := make([]interface{}, len(arg.ProductIds)+1)
    args[0] = arg.ProjektID
    for i, id := range arg.ProductIds {
        args[i+1] = id
    }
    rows, err := q.db.QueryContext(ctx, stmt, args...)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var items []Products
    for rows.Next() {
        var i Products
        if err := rows.Scan(
            &i.ID,
            &i.ProjektID,
            &i.Name,
            &i.Price,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    return items, rows.Err()
}
But when I log.Info(i) it scans the ids in a different order.
for example the items array will look like this in the end
[product4data product2data product3data product1data]
It is always the same order, but not the one given with args. Is there a way to force rows.Next() -> rows.Scan() to go from the first id in the array to the last, so that the items array holds the data for the 4 products in the same order the ids were given? So it should be
[product1Data product2Data product3Data product4Data]
instead.
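The database returns rows in whatever order it likes unless the query asks for one, so one option is to reorder the scanned items in Go after the loop. A minimal sketch, reusing the names from the question and assuming the product IDs are int64; adjust the key type to whatever arg.ProductIds actually holds:
// Index the scanned rows by ID, then rebuild the slice in the requested order.
byID := make(map[int64]Products, len(items))
for _, it := range items {
    byID[it.ID] = it
}
ordered := make([]Products, 0, len(arg.ProductIds))
for _, id := range arg.ProductIds {
    if it, ok := byID[id]; ok {
        ordered = append(ordered, it)
    }
}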

Not in condition using gorm in golang - dynamic not in condition for select statement

I have two tables, Booking and Room. Booking has roomid as one of its fields. I wrote a select statement which saves the rows retrieved in the result variable, as shown below.
var result models.Booking
rows, err := utils.DB.Model(&currRequest).Where("check_in BETWEEN ? AND ? AND check_out BETWEEN ? AND ?", currRequest.CheckIn, currRequest.CheckOut, currRequest.CheckIn, currRequest.CheckOut).Select("room_id").Rows()
for rows.Next() {
    utils.DB.ScanRows(rows, &result)
    fmt.Println(result.RoomID)
}
Now my result.RoomID has the values of the roomids that satisfy the select statement on the bookings table.
My result variable may have multiple room id values, and I can retrieve them by looping through the result variable. Now I have to check my main room table, called Room, and get those room ids that are not in the result struct. With the statement below I am only able to access the first value in result.RoomID, so the NOT IN condition only considers that first value. How do I apply the NOT IN condition to all the values in result.RoomID?
rows, err := utils.DB.Model(&models.Room{}).Not(result.RoomID).Select("room_id").Rows()
Full code:
package handlers

import (
    "encoding/json"
    "fmt"
    "net/http"
    "server/models"
    "server/utils"
    "strings"
)

func AvailableRoomsHandler(w http.ResponseWriter, r *http.Request) {
    currRequest := &models.Booking{}
    err := json.NewDecoder(r.Body).Decode(currRequest)
    //check if a valid request has been sent from front end
    if err != nil {
        //fmt.Println(err)
        var resp = map[string]interface{}{"status": false, "message": "Invalid json request"}
        json.NewEncoder(w).Encode(resp)
        return
    }
    noOfRoomsOccupied := 0
    var notinrooms string
    // Use GORM API build SQL
    //check if any rooms are available which havent been booked yet in the requested check-in and check-out dates
    var result models.Booking
    rows, err := utils.DB.Model(&currRequest).Where("check_in BETWEEN ? AND ? AND check_out BETWEEN ? AND ?", currRequest.CheckIn, currRequest.CheckOut, currRequest.CheckIn, currRequest.CheckOut).Select("room_id").Rows()
    if err != nil {
        json.NewEncoder(w).Encode(err)
        fmt.Print("error occured in select statement")
        return
    } else {
        defer rows.Close()
        for rows.Next() {
            noOfRoomsOccupied = noOfRoomsOccupied + 1
            utils.DB.ScanRows(rows, &result)
            fmt.Println(result.RoomID)
            notinrooms = notinrooms + result.RoomID + ","
        }
        notinrooms = strings.TrimRight(notinrooms, ",")
        fmt.Println(notinrooms)
        //calculate the number of rooms in the database
        //rows, err := utils.DB.Model(&models.Room{}).Select("room_id").Rows()
        res := utils.DB.Find(&models.Room{})
        rowcount := res.RowsAffected
        fmt.Println(rowcount)
        if noOfRoomsOccupied == int(rowcount) {
            var resp = map[string]interface{}{"status": false, "message": "no rooms available in the specified time period"}
            json.NewEncoder(w).Encode(resp)
            return
        } else {
            noOfRooms := (currRequest.NoOfGuests + currRequest.NoOfChildren) / 2
            if (currRequest.NoOfGuests+currRequest.NoOfChildren)%2 == 1 {
                noOfRooms = noOfRooms + 1
            }
            if int(noOfRooms) < int(rowcount)-noOfRoomsOccupied {
                fmt.Println("number of rooms to book : ", noOfRooms)
                //assign rooms if available
                var roomids models.Room
                //rows, err := utils.DB.Model(&models.Room{}).Not(result.RoomID).Select("room_id").Rows()
                fmt.Println("rooms that can be booked")
                rows, err := utils.DB.Model(&models.Room{}).Not(result.RoomID).Select("room_id").Rows()
                //rows, err := utils.DB.Model(&models.Room{}).Not([]string{notinrooms}).Select("room_id").Rows()
                //map[string]interface{}{"name": []string{"jinzhu", "jinzhu 2"}}
                if err != nil {
                    json.NewEncoder(w).Encode(err)
                    fmt.Print("error occured in select statement to get room ids to assign")
                    return
                } else {
                    defer rows.Close()
                    for rows.Next() {
                        noOfRoomsOccupied = noOfRoomsOccupied + 1
                        utils.DB.ScanRows(rows, &roomids)
                        fmt.Println(roomids.RoomID)
                    }
                }
                var success = map[string]interface{}{"message": "Select statement worked well"}
                json.NewEncoder(w).Encode(success)
                return
            }
        }
    }
}
When I use result.RoomID, it only gives me the first room id, so only that single room id gets excluded in the above select statement. How do I exclude all the room ids I found in the booking table from the rooms table data?
I tried splitting the result.RoomID values, forming a string out of them, and passing that to the select statement, but that didn't work. I also tried looping through every result.RoomID and running the NOT IN statement for each one, but that does not really make sense.
With this code:
var result models.Booking
rows, err := utils.DB.Model(&currRequest).Where("check_in BETWEEN ? AND ? AND check_out BETWEEN ? AND ?", currRequest.CheckIn, currRequest.CheckOut, currRequest.CheckIn, currRequest.CheckOut).Select("room_id").Rows()
if err != nil {
    json.NewEncoder(w).Encode(err)
    fmt.Print("error occured in select statement")
    return
} else {
    defer rows.Close()
    for rows.Next() {
        noOfRoomsOccupied = noOfRoomsOccupied + 1
        utils.DB.ScanRows(rows, &result)
        //rest of the code
    }
}
result only ever holds a single row: each iteration of the loop overwrites it, so after the loop you are left with just the last of potentially many rows from the result set. To get all the rows and extract their values, you should use []models.Booking.
result := []models.Booking{}
rows, err := utils.DB.Model(&currRequest).Where("check_in BETWEEN ? AND ? AND check_out BETWEEN ? AND ?", currRequest.CheckIn, currRequest.CheckOut, currRequest.CheckIn, currRequest.CheckOut).Select("room_id").Rows()
if err != nil {
    json.NewEncoder(w).Encode(err)
    fmt.Print("error occured in select statement")
    return
} else {
    defer rows.Close()
    for rows.Next() {
        var b models.Booking
        noOfRoomsOccupied = noOfRoomsOccupied + 1
        utils.DB.ScanRows(rows, &b)
        result = append(result, b)
        //rest of the code
    }
}
However, since you only need roomid anyway, you could make it easier by using []uint (assuming roomid is of type uint).
result := []uint{}
rows, err := utils.DB.Model(&currRequest).Where("check_in BETWEEN ? AND ? AND check_out BETWEEN ? AND ?", currRequest.CheckIn, currRequest.CheckOut, currRequest.CheckIn, currRequest.CheckOut).Select("room_id").Rows()
if err != nil {
    json.NewEncoder(w).Encode(err)
    fmt.Print("error occured in select statement")
    return
} else {
    defer rows.Close()
    for rows.Next() {
        var rid uint
        noOfRoomsOccupied = noOfRoomsOccupied + 1
        utils.DB.ScanRows(rows, &rid)
        result = append(result, rid)
        //rest of the code
    }
}
With the result being of type []uint, it would be easier to use it with the Not function (per documentation):
rows, err := utils.DB.Model(&models.Room{}).Not(result).Select("room_id").Rows()
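One caveat: when Not is given a bare slice, GORM matches it against the model's primary key, so if room_id is not Room's primary key you may need to name the column explicitly. Roughly, depending on the GORM version in use:
// Hypothetical variants; pick the one that matches your GORM version.
// Older GORM: column name plus slice -> room_id NOT IN (...)
rows, err := utils.DB.Model(&models.Room{}).Not("room_id", result).Select("room_id").Rows()
// Newer GORM: the map form does the same thing
rows, err = utils.DB.Model(&models.Room{}).Not(map[string]interface{}{"room_id": result}).Select("room_id").Rows()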

SQL Next not advancing cursor

I have a function that I use to iterate over a result set from a query:
func readRows(rows *sql.Rows, translator func(*sql.Rows) error) error {
    defer rows.Close()
    // Iterate over each row in the rows and scan each; if an error occurs then return
    for shouldScan := rows.Next(); shouldScan; {
        if err := translator(rows); err != nil {
            return err
        }
    }
    // Check if the rows had an error; if they did then return them. Otherwise,
    // close the rows and return an error if the close function fails
    if err := rows.Err(); err != nil {
        return err
    }
    return nil
}
The translator function is primarily responsible for calling Scan on the *sql.Rows object. An example of this is:
readRows(rows, func(scanner *sql.Rows) error {
    var entry gopb.TestObject
    // Embed the variables into a list that we can use to pull information out of the rows
    scanned := []interface{}{...}
    if err := scanner.Scan(scanned...); err != nil {
        return err
    }
    entries = append(entries, &entry)
    return nil
})
I wrote a unit test for this code:
// Create the SQL mock and the RDS requester
db, mock, _ := sqlmock.New()
requester := Requester{conn: db}
defer db.Close()
// Create the rows we'll use for testing the query
rows := sqlmock.NewRows([]string{"id", "data"}).
    AddRow(0, "data")
// Verify the command order for the transaction
mock.ExpectBegin()
mock.ExpectQuery(regexp.QuoteMeta("SELECT `id`, `data`, FROM `data`")).WillReturnRows(rows)
mock.ExpectRollback()
// Attempt to get the data
data, err := requester.GetData(context.TODO())
However, it appears that Next is being called infinitely. I'm not sure if this is an sqlmock issue or an issue with my code. Any help would be appreciated.
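For what it's worth, the loop in readRows never advances the cursor: rows.Next() is only called once, in the for statement's init clause, so shouldScan never changes and the first row is translated forever. A sketch of the same function using the usual iteration pattern:
func readRows(rows *sql.Rows, translator func(*sql.Rows) error) error {
    defer rows.Close()
    // Call Next on every iteration so the cursor actually advances.
    for rows.Next() {
        if err := translator(rows); err != nil {
            return err
        }
    }
    // Surface any error the iteration stopped on
    return rows.Err()
}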

Bulk insert copy sql table with golang

For context, I'm new to Go and I'm creating a program that can copy tables from Oracle to MySQL.
I use Go's database/sql package, so I assume it can be used for migrating any kind of database.
To simplify my question, I'm copying within the same MySQL database, from table world.city to world.city_copy2.
With the following code, I end up with the values of the last row repeated in all the rows of the destination table :-(
Do I somehow need to read through all the values inside the loop? What is an efficient way to do that?
package main

import (
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/go-sql-driver/mysql"
)

const (
    user   = "user"
    pass   = "testPass"
    server = "localhost"
)

func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/world", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        bulkValues = append(bulkValues, scanArgs...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
I have checked the docs of the library, and the problem here is that bulkValues keeps the pointers from scanArgs, so every time Scan overwrites the underlying values, everything already appended to bulkValues ends up pointing at the latest row.
You need to copy the data out of the values variable instead, like below:
func main() {
    fmt.Print("test")
    conStr := fmt.Sprintf("%s:%s@tcp(%s)/soverflow", user, pass, server)
    db, err := sql.Open("mysql", conStr)
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    err = db.Ping()
    if err != nil {
        panic(err.Error())
    }
    rows, err := db.Query("SELECT * FROM city")
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    columns, err := rows.Columns()
    if err != nil {
        panic(err.Error()) // proper error handling instead of panic in your app
    }
    // Make a slice for the values
    values := make([]sql.RawBytes, len(columns))
    // rows.Scan wants '[]interface{}' as an argument, so we must copy the
    // references into such a slice
    scanArgs := make([]interface{}, len(values))
    for i := range values {
        scanArgs[i] = &values[i]
    }
    // that string will be generated according to len of columns
    placeHolders := "( ?, ?, ?, ?, ? )"
    // slice will contain all the values at the end
    bulkValues := []interface{}{}
    valueStrings := make([]string, 0)
    // make an interface slice to keep each record's values
    record := make([]interface{}, len(columns))
    for rows.Next() {
        // get RawBytes from data
        err = rows.Scan(scanArgs...)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
        valueStrings = append(valueStrings, placeHolders)
        for i, col := range values {
            // copy the value out of the RawBytes buffer;
            // be careful with the data types here, check out the docs for details
            record[i] = string(col)
        }
        bulkValues = append(bulkValues, record...)
    }
    stmStr := fmt.Sprintf("INSERT INTO city_copy2 VALUES %s", strings.Join(valueStrings, ","))
    _, err = db.Exec(stmStr, bulkValues...)
    if err != nil {
        panic(err.Error())
    }
}
You can also find this example in the library's documentation.
Note: There might be more efficient ways to copy tables from one database to another, but this answer only gives a quick solution for the particular issue that you are having.
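As the comment in the code already hints, the placeholder string can also be generated from the number of columns instead of being hardcoded to five, for example:
// Build "( ?, ?, ... )" with one marker per column.
marks := make([]string, len(columns))
for i := range marks {
    marks[i] = "?"
}
placeHolders := "(" + strings.Join(marks, ", ") + ")"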

sql: scan row(s) with unknown number of columns (select * from ...)

I have a table t containing a lot of columns, and my sql is like this: select * from t. Now I only want to scan one or two columns from the wide returned row set. However, Rows.Scan accepts dest ...interface{} as arguments. Does that mean I have to scan everything and use only the columns I need?
I know I could change the sql from select * to select my_favorite_columns, however, in this case, I have no way to change the sql.
You can make use of Rows.Columns, e.g.
package main

import (
    "database/sql"
    "fmt"

    "github.com/lib/pq"
)

type Vehicle struct {
    Id     int
    Name   string
    Wheels int
}

// VehicleCol returns a reference for a column of a Vehicle
func VehicleCol(colname string, vh *Vehicle) interface{} {
    switch colname {
    case "id":
        return &vh.Id
    case "name":
        return &vh.Name
    case "wheels":
        return &vh.Wheels
    default:
        panic("unknown column " + colname)
    }
}

func panicOnErr(err error) {
    if err != nil {
        panic(err.Error())
    }
}

func main() {
    conn, err := pq.ParseURL(`postgres://docker:docker@172.17.0.2:5432/pgsqltest?schema=public`)
    panicOnErr(err)
    var db *sql.DB
    db, err = sql.Open("postgres", conn)
    panicOnErr(err)
    var rows *sql.Rows
    rows, err = db.Query("select * from vehicle")
    panicOnErr(err)
    // get the column names from the query
    var columns []string
    columns, err = rows.Columns()
    panicOnErr(err)
    colNum := len(columns)
    all := []Vehicle{}
    for rows.Next() {
        vh := Vehicle{}
        // make references for the cols with the aid of VehicleCol
        cols := make([]interface{}, colNum)
        for i := 0; i < colNum; i++ {
            cols[i] = VehicleCol(columns[i], &vh)
        }
        err = rows.Scan(cols...)
        panicOnErr(err)
        all = append(all, vh)
    }
    fmt.Printf("%#v\n", all)
}
If the number of columns is unknown but you are sure about their type:
cols, err := rows.Columns()
if err != nil {
    log.Fatal(err.Error())
}
colLen := len(cols)
vals := make([]interface{}, colLen)
for rows.Next() {
    for i := 0; i < colLen; i++ {
        vals[i] = new(string)
    }
    err := rows.Scan(vals...)
    if err != nil {
        log.Fatal(err.Error()) // if wrong type
    }
    fmt.Printf("Column 1: %s\n", *(vals[0].(*string))) // will panic if wrong type
}
PS: Not recommended for prod
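Note that scanning a NULL column into a *string makes Scan return an error. A variant of the same loop using sql.NullString tolerates NULLs, at the cost of checking Valid before using each value:
// Assumption: every column is acceptable as a nullable string.
for i := 0; i < colLen; i++ {
    vals[i] = new(sql.NullString)
}
if err := rows.Scan(vals...); err != nil {
    log.Fatal(err.Error())
}
if v := vals[0].(*sql.NullString); v.Valid {
    fmt.Printf("Column 1: %s\n", v.String)
}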