Convert query result from struct to string for another Golang package - sql

I have searched for a solution on the net and on SO, but found nothing that applies to return values. It is a simple SQL query with several rows that I want to return. Error handling is not included:
func Fetch(query string) (string) {
    type User struct{
        id   string
        name string
    }
    rows, err := db.Query(query)
    users := make([]*User, 0)
    for rows.Next() {
        user := new(User)
        err := rows.Scan(&user.id, &user.name)
        users = append(users, user)
    }
    return(users)
}
I get this error when compiling:
cannot use users (type []*User) as type string in return argument
What should I do to get a correct return value?
The expected output is
JD John Doe --OR-- {id:"JD",name:"John Doe"}

Add this to your code (it uses the fmt and strings packages):
type userSlice []*User

func (us userSlice) String() string {
    var s []string
    for _, u := range us {
        if u != nil {
            s = append(s, fmt.Sprintf("%s %s", u.id, u.name))
        }
    }
    return strings.Join(s, "\n")
}

type User struct{
    id   string
    name string
}
In your Fetch function replace the last return statement like this:
func Fetch(query string) (string) {
    // Note that we declare the User type outside the function.
    rows, err := db.Query(query)
    users := make([]*User, 0)
    for rows.Next() {
        user := new(User)
        err := rows.Scan(&user.id, &user.name)
        users = append(users, user)
    }
    return userSlice(users).String() // Replace this line in your code
}
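Because userSlice implements the fmt.Stringer interface, fmt.Println would format a userSlice value the same way on its own. A minimal usage sketch of the Fetch above (the query text and table name are only examples):

s := Fetch("SELECT id, name FROM users") // hypothetical query and table
fmt.Println(s)                           // e.g. "JD John Doe", one user per line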

If you have to return a string, you can use the encoding/json package to serialize your user objects, but the struct fields have to begin with capital letters so that they are exported. See the full example:
import (
    "encoding/json"
)

func Fetch(query string) (string) {
    type User struct{
        Id   string // <-- CHANGED THIS LINE
        Name string // <-- CHANGED THIS LINE
    }
    rows, err := db.Query(query)
    users := make([]*User, 0)
    for rows.Next() {
        user := new(User)
        err := rows.Scan(&user.Id, &user.Name)
        users = append(users, user)
    }
    return ToJSON(users) // <-- CHANGED THIS LINE
}

func ToJSON(obj interface{}) (string) {
    res, err := json.Marshal(obj)
    if err != nil {
        panic("error with json serialization " + err.Error())
    }
    return string(res)
}
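If you want the lowercase keys from the expected output (something like {"id":"JD","name":"John Doe"}), you can keep the exported fields and add json struct tags; a minimal sketch:

type User struct {
    Id   string `json:"id"`
    Name string `json:"name"`
}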

Related

Filtering and sorting an SQL query to recreate a nested struct

I am new to Go and I'm trying to populate a struct called Reliefworker from an SQL query which I can send as a JSON payload.
Essentially I have a Reliefworker who may be assigned to many communities, and each community can consist of multiple regions.
I suspect there is a cleverer way of doing this than what I intend, which is a primitive solution: add a sort to the SQL (by community) and write a function that detects when the community being added differs from the previous one, in which case I would create a new Community struct value to be appended.
type Reliefworker struct {
    Name          string      `json:"name"`
    Communities   []Community `json:"community"`
    Deployment_id string      `json:"deployment_id"`
}

type Community struct{
    Name         string   `json:"name"`
    community_id string   `json:"community_id"`
    Regions      []Region `json:"regions"`
}

type Region struct{
    Name                 string `json:"name"`
    Region_id            string `json:"region_id"`
    Reconstruction_grant string `json:"reconstruction_grant"`
    Currency             string `json:"currency"`
}
Currently I have created a struct that reflects what I am actually getting from SQL while pondering my next move. Perhaps this might be a good stepping stone instead of attempting an on-the-fly transformation?
type ReliefWorker_community_region struct {
    Deployment_id        string
    Community_title      int
    Region_name          string
    Reconstruction_grant int
}
func GetReliefWorkers(deployment_id string) []Reliefworker {
    fmt.Printf("Confirm I have a deployment id:%v\n", deployment_id)
    rows, err := middleware.Db.Query("select deployment_id, community_title, region_name, reconstruction_grant WHERE Deployment_id=$1", brand_id)
    if err != nil {
        return
    }
    for rows.Next() {
        reliefworker := Reliefworker{}
        err = rows.Scan(&deployment_id, &community_title, &region_name, &reconstruction_grant)
        if err != nil {
            return
        }
    }
    rows.Close()
    return
}
I think a sort makes a lot of sense; primitive solutions can be the most efficient:
func GetReliefWorkers(deployment_id string) []Reliefworker {
    // Added sort to query
    q := "select worker_name, community_title, region_name, reconstruction_grant WHERE deployment_id=? ORDER BY community_title"
    rows, err := middleware.Db.Query(q, deployment_id)
    if err != nil {
        return nil
    }
    defer rows.Close()                       // So rows get closed even on an error
    c := Community{}                         // To keep track of the current community
    cmatrix := [][]string{{}}                // Matrix of communities and workers
    communities := []Community{}             // List of communities
    workers := make(map[string]Reliefworker) // Map of workers
    var ccount int                           // Index of community in lists
    for rows.Next() {
        w := Reliefworker{Deployment_id: deployment_id}
        r := Region{}
        var ctitle string // For comparison later
        err = rows.Scan(&w.Name, &ctitle, &r.Name, &r.Reconstruction_grant)
        if err != nil {
            return nil
        }
        if ctitle != c.Name {
            communities = append(communities, c)
            c = Community{}
            c.Name = ctitle
            ccount++
            cmatrix = append(cmatrix, []string{})
        }
        c.Regions = append(c.Regions, r)
        cmatrix[ccount] = append(cmatrix[ccount], w.Name)
        workers[w.Name] = w
    }
    communities = append(communities, c) // Append the final community after the loop
    for i, c := range communities {
        for _, id := range cmatrix[i] {
            w := workers[id] // Copy out of the map; map values can't be assigned to directly
            w.Communities = append(w.Communities, c)
            workers[id] = w
        }
    }
    out := []Reliefworker{}
    for _, w := range workers {
        out = append(out, w)
    }
    return out
}
Though it might make even more sense to create separate tables for communities, regions, and workers, then query them all with a JOIN (see the sketch below): https://www.w3schools.com/sql/sql_join_inner.asp
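A rough sketch of what such a join could look like; every table and column name here (workers, assignments, communities, regions and their columns) is an assumption about a possible schema, not something taken from the question:

q := `SELECT w.worker_name, c.community_title, r.region_name, r.reconstruction_grant
      FROM workers w
      JOIN assignments a ON a.worker_id = w.id
      JOIN communities c ON c.id = a.community_id
      JOIN regions r ON r.community_id = c.id
      WHERE w.deployment_id = ?
      ORDER BY c.community_title`
rows, err := middleware.Db.Query(q, deployment_id)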
UPDATE: Since you only want to retrieve one ReliefWorker, would something like this work?
type ReliefWorker struct {
    Name        string      `json:"name"`
    Communities []Community `json:"community"`
}

type Community struct {
    Name    string   `json:"name"`
    Regions []Region `json:"regions"`
}

type Region struct {
    Name                 string `json:"name"`
    Region_id            string `json:"region_id"`
    Reconstruction_grant int    `json:"reconstruction_grant"`
    Currency             string `json:"currency"`
}

func GetReliefWorkers(deployment_id string) ReliefWorker {
    reliefworker := ReliefWorker{}
    communities := make(map[string]Community)
    rows, err := middleware.Db.Query("select name, community_title, region_name, region_id, reconstruction_grant WHERE Deployment_id=$1", deployment_id)
    if err != nil {
        if err == sql.ErrNoRows {
            fmt.Printf("No records for ReliefWorker:%v\n", deployment_id)
        }
        panic(err)
    }
    defer rows.Close()
    for rows.Next() {
        c := Community{}
        r := Region{}
        err = rows.Scan(&reliefworker.Name, &c.Name, &r.Name, &r.Region_id, &r.Reconstruction_grant)
        if err != nil {
            panic(err)
        }
        if _, ok := communities[c.Name]; ok {
            c = communities[c.Name]
        }
        c.Regions = append(c.Regions, r)
        communities[c.Name] = c
    }
    for _, c := range communities {
        reliefworker.Communities = append(reliefworker.Communities, c)
    }
    return reliefworker
}
OK, my crude solution doesn't contain a shred of the intelligence @Absentbird demonstrates, but I'm here to learn.
@Absentbird, I love your use of maps and multidimensional arrays to hold a matrix of communities and workers. I will focus on making this part of my arsenal over the weekend.
I can accept and adapt @Absentbird's solution once I understand why it gives the error "cannot assign to struct field workers[id].Communities in map" (compiler: UnaddressableFieldAssign) for the line workers[id].Communities = append(workers[id].Communities, c).
First, apologies, as I had to correct two things: I only needed to return a single ReliefWorker (not an array of ReliefWorkers), and the ReliefWorker struct did not need to contain the Deployment_id since I already knew it.
I am new to Go so I'd really appreciate feedback on what I can do to better leverage the language and write more concise code.
My structs and solution are currently as follows:
type ReliefWorker struct {
    Name        string      `json:"name"`
    Communities []Community `json:"community"`
}

type Community struct {
    Name    string   `json:"name"`
    Regions []Region `json:"regions"`
}

type Region struct {
    Name                 string `json:"name"`
    Region_id            string `json:"region_id"`
    Reconstruction_grant int    `json:"reconstruction_grant"`
    Currency             string `json:"currency"`
}

type ReliefWorker_community_region struct {
    Name                 string
    Community_title      string
    Region_name          string
    Reconstruction_grant int
}
func GetReliefWorkers(deployment_id string) ReliefWorker {
    var reliefworker ReliefWorker
    var communitiesOnly []string
    var name string
    var allReliefWorkerData []ReliefWorker_community_region
    rows, err := middleware.Db.Query("select name, community_title, region_name, reconstruction_grant WHERE Deployment_id=$1", deployment_id)
    for rows.Next() {
        reliefWorker_community_region := ReliefWorker_community_region{}
        err = rows.Scan(&reliefWorker_community_region.Name, &reliefWorker_community_region.Community_title, &reliefWorker_community_region.Region_name, &reliefWorker_community_region.Reconstruction_grant)
        if err != nil {
            panic(err)
        }
        name = reliefWorker_community_region.Name
        allReliefWorkerData = append(allReliefWorkerData, reliefWorker_community_region)
        communitiesOnly = append(communitiesOnly, reliefWorker_community_region.Community_title) // All communities go in here, even duplicates; we will create a unique set later
    }
    rows.Close()
    if err != nil {
        if err == sql.ErrNoRows {
            fmt.Printf("No records for ReliefWorker:%v\n", deployment_id)
        }
        panic(err)
    }
    var unique []string // Use this to create a unique index of communities
    for _, v := range communitiesOnly {
        skip := false
        for _, u := range unique {
            if v == u {
                skip = true
                break
            }
        }
        if !skip {
            unique = append(unique, v)
        }
    }
    fmt.Println(unique)
    reliefworker.Name = name
    var community Community
    var communities []Community
    for _, v := range unique {
        community.Name = v
        communities = append(communities, community)
    }
    // Go through each record from the database held within allReliefWorkerData and grab every region belonging to a Community; when done, append it to Communities and finally append that to ReliefWorker
    for j, u := range communities {
        var regions []Region
        for _, v := range allReliefWorkerData {
            if v.Community_title == u.Name {
                var region Region
                region.Name = v.Region_name
                region.Reconstruction_grant = v.Reconstruction_grant
                regions = append(regions, region)
            }
        }
        communities[j].Regions = regions
    }
    reliefworker.Communities = communities
    return reliefworker
}
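Since the post asks for feedback on conciseness, one small simplification (a sketch only, not tested against the full program) is to build the unique community list with a map instead of the nested loops:

seen := make(map[string]bool)
var unique []string
for _, title := range communitiesOnly {
    if !seen[title] {
        seen[title] = true
        unique = append(unique, title)
    }
}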

Scan array typed DB field into array/slice in Golang

tl;dr
I have an SQL table users which includes an array field. How can I Scan it into a variable in Go? My approach:
var id int
var username string
var activities []string
row := db.QueryRow("SELECT id, username, activities FROM users WHERE id = 1")
err := row.Scan(&id, &username, &activities)
Works fine for id, username.
As @mkopriva already pointed out, this can be accomplished either with the StringArray type or with the more flexible Array function (which accepts an interface{} as its argument), both found in the "github.com/lib/pq" package.
As an aside, it is also a good practice to use prepared statements.
Full example:
var id int
var username string
var activities []string

sqlStatement := `
    SELECT
        id,
        username,
        activities
    FROM
        users
    WHERE
        id = $1
`

stmt, err := db.Prepare(sqlStatement)
if err != nil {
    // handle err
}
defer stmt.Close()

row := stmt.QueryRow(1)
err = row.Scan(
    &id,
    &username,
    pq.Array(&activities), // used here
)
if err == sql.ErrNoRows {
    // handle err
}
if err != nil {
    // handle err
}
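For comparison, the StringArray variant mentioned above looks roughly like this (a sketch; pq.StringArray is a named []string type that implements sql.Scanner, so it can be passed to Scan directly):

var activities pq.StringArray
err = stmt.QueryRow(1).Scan(&id, &username, &activities)
if err != nil {
    // handle err
}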

Checking if a value exists in sqlite db with Go

I'm writing code to manage users in a sqlite database with Go.
I'm trying to check if a username is taken, but my code is ugly.
My table looks like:
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE,
password TEXT
And I check if a username is taken with:
func UserExists(db *sql.DB, username string) bool {
    sqlStmt := `SELECT username FROM userinfo WHERE username = ?`
    count := 0
    rows, err := db.Query(sqlStmt, username)
    Check(err)
    for rows.Next() { // Can I just check if rows is non-zero somehow?
        count++
    }
    return count != 0
}
Is there a better query I could use that would tell me if the username value exists in the table in a more straightforward way? Or is there a nicer way to check if rows is non-zero?
Use QueryRow to query at most one row. If the query doesn't return any row, Scan returns sql.ErrNoRows.
func UserExists(db *sql.DB, username string) bool {
    sqlStmt := `SELECT username FROM userinfo WHERE username = ?`
    err := db.QueryRow(sqlStmt, username).Scan(&username)
    if err != nil {
        if err != sql.ErrNoRows {
            // a real error happened! you should change your function return
            // to "(bool, error)" and return "false, err" here
            log.Print(err)
        }
        return false
    }
    return true
}
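Another common pattern is to let the database do the existence check with SELECT EXISTS. A minimal sketch against the same userinfo table (note it returns (bool, error) rather than just bool):

func UserExists(db *sql.DB, username string) (bool, error) {
    var exists bool
    err := db.QueryRow(`SELECT EXISTS(SELECT 1 FROM userinfo WHERE username = ?)`, username).Scan(&exists)
    return exists, err
}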
I know this is a bit old, but I don't see any clean answers here. Notice the if rows.Next() statement below; rows.Next() returns a boolean telling you whether there are any rows:
package main

import (
    "database/sql"
    "log"

    _ "github.com/mattn/go-sqlite3"
)

func main() {
    exists, _ := SelectDBRowExists(`SELECT * FROM GEO_VELOCITY_EVENTS WHERE USERNAME='bob'`)
    log.Println(exists)
}

func SelectDBRowExists(query string) (bool, error) {
    DbConn, err := sql.Open("sqlite3", "/path/to/your/sql.sqlite3")
    if err != nil {
        return false, err
    }
    defer DbConn.Close()
    err = DbConn.Ping()
    if err != nil {
        return false, err
    }
    rows, err := DbConn.Query(query)
    if err != nil {
        return false, err
    }
    defer rows.Close()
    if rows.Next() {
        return true, nil
    }
    return false, nil
}
As long as you are only concerned with the existence of a single piece of information in the database, I find this method simpler and efficient:
func emailExists(email string) bool {
    row := db.QueryRow("select user_email from users where user_email = ?", email)
    temp := ""
    err := row.Scan(&temp)
    if err != nil && err != sql.ErrNoRows {
        checkErr(err)
    }
    return temp != ""
}
If you noticed, I am only getting a single row.
The result of my query gets scanned into a temp variable.
Then I check whether the temp variable is empty or not.
If it is not empty, true is returned.
I think we can use the above function to check whether the record already exists in the database:
if helper.UserExists(db, username) {
    return username, errors.New("data already exist")
}
Because that function returns a bool, the check reads naturally in an if statement like the one above.
I hope this is useful.

sql: scan row(s) with unknown number of columns (select * from ...)

I have a table t containing a lot of columns, and my SQL is like this: select * from t. Now I only want to scan one column or two from the wide returned row set. However, Rows.Scan accepts dest ...interface{} as arguments. Does that mean I have to scan everything and use only the columns I need?
I know I could change the SQL from select * to select my_favorite_rows; however, in this case, I have no way to change the SQL.
You can make use of Rows.Columns, e.g.
package main

import (
    "database/sql"
    "fmt"

    "github.com/lib/pq"
)

type Vehicle struct {
    Id     int
    Name   string
    Wheels int
}

// VehicleCol returns a reference for a column of a Vehicle
func VehicleCol(colname string, vh *Vehicle) interface{} {
    switch colname {
    case "id":
        return &vh.Id
    case "name":
        return &vh.Name
    case "wheels":
        return &vh.Wheels
    default:
        panic("unknown column " + colname)
    }
}

func panicOnErr(err error) {
    if err != nil {
        panic(err.Error())
    }
}

func main() {
    conn, err := pq.ParseURL(`postgres://docker:docker@172.17.0.2:5432/pgsqltest?schema=public`)
    panicOnErr(err)
    var db *sql.DB
    db, err = sql.Open("postgres", conn)
    panicOnErr(err)
    var rows *sql.Rows
    rows, err = db.Query("select * from vehicle")
    panicOnErr(err)
    // get the column names from the query
    var columns []string
    columns, err = rows.Columns()
    panicOnErr(err)
    colNum := len(columns)
    all := []Vehicle{}
    for rows.Next() {
        vh := Vehicle{}
        // make references for the cols with the aid of VehicleCol
        cols := make([]interface{}, colNum)
        for i := 0; i < colNum; i++ {
            cols[i] = VehicleCol(columns[i], &vh)
        }
        err = rows.Scan(cols...)
        panicOnErr(err)
        all = append(all, vh)
    }
    fmt.Printf("%#v\n", all)
}
For an unknown number of columns, but where you're sure about their type:
cols, err := rows.Columns()
if err != nil {
    log.Fatal(err.Error())
}
colLen := len(cols)
vals := make([]interface{}, colLen)
for rows.Next() {
    for i := 0; i < colLen; i++ {
        vals[i] = new(string)
    }
    err := rows.Scan(vals...)
    if err != nil {
        log.Fatal(err.Error()) // if wrong type
    }
    fmt.Printf("Column 1: %s\n", *(vals[0].(*string))) // will panic if wrong type
}
PS: Not recommended for prod
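If you are not sure about the column types either, a common variation (a sketch, not production-hardened) is to scan into *interface{} values and let the driver decide what each column holds:

cols, err := rows.Columns()
if err != nil {
    log.Fatal(err)
}
vals := make([]interface{}, len(cols))
ptrs := make([]interface{}, len(cols))
for i := range vals {
    ptrs[i] = &vals[i]
}
for rows.Next() {
    if err := rows.Scan(ptrs...); err != nil {
        log.Fatal(err)
    }
    for i, v := range vals {
        // Many drivers return []byte or int64 here; convert as needed.
        if b, ok := v.([]byte); ok {
            fmt.Printf("%s: %s\n", cols[i], b)
        } else {
            fmt.Printf("%s: %v\n", cols[i], v)
        }
    }
}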

redigo, SMEMBERS, how to get strings

I am using redigo to connect from Go to a Redis database. How can I convert a value of type []interface{} holding []byte elements into a set of strings? In this case I'd like to get the two strings Hello and World.
package main

import (
    "fmt"

    "github.com/garyburd/redigo/redis"
)

func main() {
    c, err := redis.Dial("tcp", ":6379")
    defer c.Close()
    if err != nil {
        fmt.Println(err)
    }
    c.Send("SADD", "myset", "Hello")
    c.Send("SADD", "myset", "World")
    c.Flush()
    c.Receive()
    c.Receive()
    err = c.Send("SMEMBERS", "myset")
    if err != nil {
        fmt.Println(err)
    }
    c.Flush()
    // both give the same return value!?!?
    // reply, err := c.Receive()
    reply, err := redis.MultiBulk(c.Receive())
    if err != nil {
        fmt.Println(err)
    }
    fmt.Printf("%#v\n", reply)
    // $ go run main.go
    // []interface {}{[]byte{0x57, 0x6f, 0x72, 0x6c, 0x64}, []byte{0x48, 0x65, 0x6c, 0x6c, 0x6f}}
    // How do I get 'Hello' and 'World' from this data?
}
Look in the module source code:
// String is a helper that converts a Redis reply to a string.
//
// Reply type    Result
// integer       format as decimal string
// bulk          return reply as string
// string        return as is
// nil           return error ErrNil
// other         return error
func String(v interface{}, err error) (string, error) {
redis.String will convert (v interface{}, err error) into (string, error).
reply, err := redis.MultiBulk(c.Receive())
replace with
s, err := redis.String(redis.MultiBulk(c.Receive()))
Looking at the source code for the module, you can see the type signature returned from Receive will be:
func (c *conn) Receive() (reply interface{}, err error)
and in your case, you're using MultiBulk:
func MultiBulk(v interface{}, err error) ([]interface{}, error)
This gives a reply of multiple interface{} values in a slice: []interface{}
Given an untyped interface{}, you have to assert its type like so:
x.(T)
where T is a type (e.g. int, string, etc.).
In your case, you have a slice of interfaces (type []interface{}), so if you want strings you first need to assert that each element has type []byte, and then convert it to a string, e.g.:
for _, x := range reply {
    var v, ok = x.([]byte)
    if ok {
        fmt.Println(string(v))
    }
}
Here's an example: http://play.golang.org/p/ZifbbZxEeJ
You can also use a type switch to check what kind of data you got back:
http://golang.org/ref/spec#Type_switches
for _, y := range reply {
    switch i := y.(type) {
    case nil:
        printString("x is nil")
    case int:
        printInt(i) // i is an int
    // etc...
    }
}
Or, as someone mentioned, use the built-in redis.String etc. helpers, which will check and convert the values for you.
I think the key is that each element needs to be converted; you can't just do them as a chunk (unless you write a helper to do so!).
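For example, redigo's redis.Strings helper does the slice conversion in one call; a small sketch based on the code in the question:

members, err := redis.Strings(c.Receive())
if err != nil {
    fmt.Println(err)
}
fmt.Println(members) // e.g. [World Hello]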
Since redis.MultiBulk() is now deprecated, it is better to use redis.Values() and convert the result into strings:
import "github.com/gomodule/redigo/redis"
type RedisClient struct {
Conn redis.Conn
}
func (r *RedisClient) SMEMBERS(key string) interface{} {
tmp, err := redis.Values(r.Conn.Do("smembers", key))
if err != nil {
fmt.Println(err)
return nil
}
res := make([]string, 0)
for _, v := range tmp {
res = append(res, string(v.([]byte)))
}
return res
}