How can I unlock the database in Go - sql

I'm a newbie in Go and not the best at SQL.
I have a simple table in my database named users. I store the SAM, first name and last name in the table. When I now try to change something in the database, I get the error "database is locked". That's my code:
func createNewUser(w http.ResponseWriter, r *http.Request) {
    var user User
    err := decodeJSONBody(w, r, &user)
    if checkError(w, err) {
        return
    }
    rows, err := mainDB.Query("SELECT * FROM users WHERE SAM = ?", user.Sam)
    if checkError(w, err) {
        return
    }
    defer rows.Close()
    if rows.Next() {
        http.Error(w, "User already exists", http.StatusConflict)
        return
    }
    _, err = mainDB.Exec("INSERT INTO users (SAM, Vorname, Nachname) VALUES (?, ?, ?)", user.Sam, user.Vorname, user.Nachname)
    if checkError(w, err) {
        return
    }
    json.NewEncoder(w).Encode(user)
}
decodeJSONBody and checkError work and have nothing to do with the database.
And as far as I've learned, rows.Close should release the rows so that I can write to the database again.

As per the comments, SQLite has some limitations around locking/concurrency, which means you need to take care when running multiple statements concurrently. Unfortunately I had not reviewed your code in detail when posting my comment, so, despite seemingly solving the issue, it was in error.
You had added a defer rows.Close(); this will free up the database connection used to run the query but, due to the defer, this will only happen when the surrounding function returns. Normally this is not a big issue because looping through a result set in its entirety automatically closes the rows. The documentation states:
If Next is called and returns false and there are no further result sets, the Rows are closed automatically and it will suffice to check the result of Err.
In your code you do return if rows.Next() is true:
if rows.Next() {
    http.Error(w, "User already exists", http.StatusConflict)
    return
}
This means that adding an extra rows.Close() should not be needed. However, as you say you "added rows.Close() multiple times, and now it works", I suspect that your full code may have been a bit more complicated than what is presented (and one of the added rows.Close() calls was needed).
So adding extra calls to rows.Close() should not be needed; it will not cause an issue (other than an unnecessary function call). However, you should check for errors:
rows, err := mainDB.Query("SELECT * FROM users WHERE SAM = ?", user.Sam)
if checkError(w, err) {
    return // rows is nil when Query returns an error, so there is nothing to close
}
defer rows.Close()
if rows.Next() {
    http.Error(w, "User already exists", http.StatusConflict)
    return
}
if err = rows.Err(); err != nil {
    return // it's worth checking for an error here
}
Note that the FAQ for go-sqlite3 includes information on dealing with "Error: database is locked" (and it's worth ensuring you follow the recommendations).
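For reference, a minimal sketch of the kind of mitigation that FAQ suggests, assuming the mattn/go-sqlite3 driver (the DSN options and the file name here are illustrative, not taken from the question):
package main

import (
    "database/sql"

    _ "github.com/mattn/go-sqlite3"
)

func openDB() (*sql.DB, error) {
    // cache=shared and _busy_timeout are go-sqlite3 DSN options that reduce
    // "database is locked" errors by sharing the cache and waiting on locks.
    db, err := sql.Open("sqlite3", "file:users.db?cache=shared&_busy_timeout=5000")
    if err != nil {
        return nil, err
    }
    // SQLite allows only one writer at a time, so limiting the pool to a
    // single connection avoids in-process lock contention.
    db.SetMaxOpenConns(1)
    return db, nil
}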
Note 2: Consider using EXISTS instead of running the query and then attempting to fetch a row - it is likely to be faster and allows you to use QueryRow, which simplifies your code.
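A sketch of that approach, reusing the names from the question:
var exists bool
err = mainDB.QueryRow("SELECT EXISTS (SELECT 1 FROM users WHERE SAM = ?)", user.Sam).Scan(&exists)
if checkError(w, err) {
    return
}
if exists {
    http.Error(w, "User already exists", http.StatusConflict)
    return
}
// No Rows object is left open here, so the INSERT that follows cannot be
// blocked by this query's connection.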

Related

PlaceHolderFormat doesn't replace the dollar sign for the parameter value during SQL using pgx driver for postgres

I am new to Go and am trying to check a password against a username in a PostgreSQL database.
I can't get dollar substitution to occur and would rather not resort to concatenating strings.
I am currently using squirrel, but I also tried it without and didn't have much luck.
I have the following code:
package datalayer

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "net/http"
    sq "github.com/Masterminds/squirrel"
    _ "github.com/jackc/pgx/v4/stdlib"
    "golang.org/x/crypto/bcrypt"
    "github.com/gin-gonic/gin"
)

var (
    // for the database
    db *sql.DB
)

func InitDB(sqlDriver string, dataSource string) error {
    var err error
    // Connect to the postgres db (sqlDriver is literal string "pgx")
    db, err = sql.Open(sqlDriver, dataSource)
    if err != nil {
        panic(err)
    }
    return db.Ping()
}
// Create a struct that models the structure of a user, both in the request body, and in the DB
type Credentials struct {
    Password string `json:"password" db:"password"`
    Username string `json:"username" db:"username"`
}
func Signin(c *gin.Context) {
    // Parse and decode the request body into a new `Credentials` instance
    creds := &Credentials{}
    err := json.NewDecoder(c.Request.Body).Decode(creds)
    if err != nil {
        // If there is something wrong with the request body, return a 400 status
        c.Writer.WriteHeader(http.StatusBadRequest)
        return
    }
    query := sq.
        Select("password").
        From("users").
        Where("username = $1", creds.Username).
        PlaceholderFormat(sq.Dollar)
    // The line below doesn't substitute the $ sign, it shows this: SELECT password FROM users WHERE username = $1 [rgfdgfd] <nil>
    fmt.Println(sq.
        Select("password").
        From("users").
        Where("username = $1", creds.Username).
        PlaceholderFormat(sq.Dollar).ToSql())
    rows, sqlerr := query.RunWith(db).Query()
    if sqlerr != nil {
        panic(fmt.Sprintf("QueryRow failed: %v", sqlerr))
    }
    if err != nil {
        // If there is an issue with the database, return a 500 error
        c.Writer.WriteHeader(http.StatusInternalServerError)
        return
    }
    // We create another instance of `Credentials` to store the credentials we get from the database
    storedCreds := &Credentials{}
    // Store the obtained password in `storedCreds`
    err = rows.Scan(&storedCreds.Password)
    if err != nil {
        // If an entry with the username does not exist, send an "Unauthorized"(401) status
        if err == sql.ErrNoRows {
            c.Writer.WriteHeader(http.StatusUnauthorized)
            return
        }
        // If the error is of any other type, send a 500 status
        c.Writer.WriteHeader(http.StatusInternalServerError)
        return
    }
    // Compare the stored hashed password, with the hashed version of the password that was received
    if err = bcrypt.CompareHashAndPassword([]byte(storedCreds.Password), []byte(creds.Password)); err != nil {
        // If the two passwords don't match, return a 401 status
        c.Writer.WriteHeader(http.StatusUnauthorized)
    }
    fmt.Printf("We made it !")
    // If we reach this point, that means the users password was correct, and that they are authorized
    // The default 200 status is sent
}
I see the following when I check pgAdmin, which shows the dollar sign not being substituted.
The substitution of the placeholders is done by the Postgres server; it SHOULD NOT be the job of the Go code, or squirrel, to do the substitution.
When you are executing a query that takes parameters, a rough outline of what the database driver has to do is something like the following:
1. Using the query string, with placeholders untouched, a parse request is sent to the postgres server to create a prepared statement.
2. Using the parameter values and the identifier of the newly-created statement, a bind request is sent to make the statement ready for execution by creating a portal. A portal (similar to, but not the same as, a cursor) represents a ready-to-execute or already-partially-executed statement, with any missing parameter values filled in.
3. Using the portal's identifier, an execute request is sent to the server, which then executes the portal's query.
Note that the above steps are just a rough outline; in reality there are more request-response cycles involved between the db client and server.
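Loosely, the same split is visible in database/sql when you prepare the statement explicitly; a sketch using the query and variables from the question:
// Parse: the SQL text, with $1 untouched, is sent to the server.
stmt, err := db.Prepare("SELECT password FROM users WHERE username = $1")
if err != nil {
    panic(err)
}
defer stmt.Close()

// Bind + execute: the parameter value travels separately from the SQL text,
// so it is the server, not the client, that combines the two.
var password string
if err := stmt.QueryRow(creds.Username).Scan(&password); err != nil {
    panic(err)
}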
And as far as pgAdmin is concerned I believe what it is displaying to you is the prepared statement as created by the parse request, although I can't tell for sure as I am not familiar with it.
In theory, a helper library like squirrel, or a driver library like pgx, could implement the substitution of parameters themselves and then send a simple query to the server. In general, however, given the possibility of SQL injections, it is better to leave it to the authority of the postgres server, in my opinion.
The PlaceholderFormat's job is simply to translate the placeholders to the specified format. For example, you could write your SQL using the MySQL format (?,?,...) and then invoke the PlaceholderFormat(sq.Dollar) method to translate that into the PostgreSQL format ($1,$2,...).
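A small sketch of that translation (the username value is illustrative):
sqlStr, args, err := sq.
    Select("password").
    From("users").
    Where("username = ?", "someuser"). // written with the generic ? placeholder
    PlaceholderFormat(sq.Dollar).      // rewritten to $1, $2, ... for PostgreSQL
    ToSql()
if err != nil {
    panic(err)
}
fmt.Println(sqlStr, args) // SELECT password FROM users WHERE username = $1 [someuser]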

Gorm add multiple slices in inserting in a many to many

I'm new to Go and GORM. I'm trying to insert many values in one SQL query.
I wrote this query to add multiple conversations to a user:
relationUserConversation := make([][]uint, len(users))
for i, v := range users {
    relationUserConversation[i] = []uint{conversation.ID, v}
}
result = r.db.Debug().Exec(
    "INSERT INTO `user_has_conversations` (`user_has_conversations`.`conversation_id`, `user_has_conversations`.`user_id`) VALUES ?",
    relationUserConversation, // if I pass relationUserConversation[0], relationUserConversation[1] instead, it works
    // the issue is that the query ends up as VALUES ((35,1),(35,2)), but it would need to be VALUES (35,1),(35,2) to work
)
I also tried to add the users directly when creating the conversation, which is what I would really like to do, but I'm having an issue with the many-to-many relation: instead of creating the relation between the user and the conversation, it tries to insert the user again.
My conversation model:
type Conversation struct {
    ID       uint    `gorm:"primarykey"`
    Users    []*User `gorm:"many2many:user_has_conversations;"`
    Messages []ConversationMessage
}
It would be great if I could create a new conversation with the related users in one query instead of first creating the conversation and then the relation to the users.
Below is a minimum working example using the Gorm Append method (see the documentation here) to create a many-to-many association between two (or more) models. Hopefully you can adapt this to your use case.
package main

import (
    "fmt"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type User struct {
    gorm.Model
    Name          string
    Conversations []Conversation `gorm:"many2many:user_conversations;"`
}

type Conversation struct {
    gorm.Model
    Name  string
    Users []*User `gorm:"many2many:user_conversations;"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("many2many.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }
    // Migrate the schema
    err = db.AutoMigrate(&User{}, &Conversation{})
    if err != nil {
        fmt.Print(err)
    }
    userOne := User{
        Name: "User One",
    }
    userTwo := User{
        Name: "User Two",
    }
    // Create users
    db.Create(&userOne)
    db.Create(&userTwo)
    conversation := Conversation{
        Name: "Conversation One",
    }
    // Create conversation
    db.Create(&conversation)
    // Append users
    err = db.Model(&conversation).Association("Users").Append([]User{userOne, userTwo})
    if err != nil {
        fmt.Print(err)
    }
    for _, convUser := range conversation.Users {
        fmt.Println("Hello I am in the conversation: " + convUser.Name)
    }
    // Clean up database
    db.Delete(&userOne)
    db.Delete(&userTwo)
    db.Delete(&conversation)
}
Number of queries
If you enable Debug() on Gorm:
err = db.Debug().Model(&conversation).Association("Users").Append([]User{userOne, userTwo})
It shows this:
[0.144ms] [rows:2] INSERT INTO `user_conversations`
(`conversation_id`,`user_id`) VALUES (8,15),(8,16) ON CONFLICT DO NOTHING
The VALUES part is correct (what you were trying to do manually) and is achieved using the ORM.
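If you would rather not call Append separately, Gorm can also create the join rows when the conversation is created with its Users slice already populated. A sketch, assuming userOne and userTwo have already been created as above:
conversation := Conversation{
    Name:  "Conversation One",
    Users: []*User{&userOne, &userTwo}, // existing users: Gorm upserts them and inserts the join rows
}
// One Create call produces the conversation row plus the user_conversations
// entries linking it to both users (still several statements under the hood).
db.Create(&conversation)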

gorm raw sql query execution

I am running a query to check if a table exists or not, using the GORM ORM for Go. Below is my code.
package main

import (
    "fmt"
    "log"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
    _ "github.com/lib/pq"
)

// App sets up and runs the app
type App struct {
    DB *gorm.DB
}

const tableCreationQuery = `SELECT count (*)
FROM information_schema.TABLES
WHERE (TABLE_SCHEMA = 'api_test') AND (TABLE_NAME = 'Users')`

func ensureTableExists() {
    if err := a.DB.Exec(tableCreationQuery); err != nil {
        log.Fatal(err)
    }
}
The expected response should be either 1 or 0 (I got this from another SO answer). Instead I get this:
2020/09/03 00:27:18 &{0xc000148900 1 0xc000119ba0 0}
exit status 1
FAIL go-auth 0.287s
My untrained mind says it's a pointer, but how do I reference the returned values to determine what was contained within?
If you want to check if your SQL statement was successfully executed in GORM you can use the following:
tx := DB.Exec(sqlStr, args...)
if tx.Error != nil {
    return false
}
return true
However, your example uses a SELECT statement, so you need to read the result back; that is better suited to the DB.Raw() method, like below:
var exists bool
DB.Raw(sqlStr).Row().Scan(&exists)
return exists
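Putting the two together, a minimal sketch of what ensureTableExists could look like with the count actually read back (the schema and table names are the ones from the question):
func ensureTableExists(db *gorm.DB) (bool, error) {
    const q = `SELECT count(*)
        FROM information_schema.tables
        WHERE table_schema = 'api_test' AND table_name = 'Users'`

    var count int64
    // Raw + Row + Scan reads the single value back instead of discarding it,
    // which is what Exec does.
    if err := db.Raw(q).Row().Scan(&count); err != nil {
        return false, err
    }
    return count > 0, nil
}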

BigQuery Schema Update while copying data from other tables

I have table1, which has lots of nested columns, and table2, which has some additional columns that may also be nested. I'm using the Go client library.
Is there any way to update the schema while copying from one table to another?
Sample code:
dataset := client.Dataset("test")
copier = dataset.Table(table1).CopierFrom(dataset.Table(table2))
copier.WriteDisposition = bigquery.WriteAppend
copier.CreateDisposition = bigquery.CreateIfNeeded
job, err = copier.Run(ctx)
if err != nil {
    fmt.Println("error while run :", err)
}
status, err = job.Wait(ctx)
if err != nil {
    fmt.Println("error in wait :", err)
}
if err := status.Err(); err != nil {
    fmt.Println("error in status :", err)
}
Some background first:
I created two tables under the dataset test, as follows:
Table 1 has the schema name (String), age (Integer) and the rows:
"Varun", 19
"Raja", 27
Table 2 has the schema pet_name (String), type (String) and the rows:
"jimmy", "dog"
"ramesh", "cat"
Note that the two relations have different schemas.
Here I am copying the contents of table 2 into table 1. The bigquery.WriteAppend disposition tells the copy job to append the rows of table 2 to table 1.
test := client.Dataset("test")
copier := test.Table("1").CopierFrom(test.Table("2"))
copier.WriteDisposition = bigquery.WriteAppend
if _, err := copier.Run(ctx); err != nil {
    log.Fatalln(err)
}
query := client.Query("SELECT * FROM `test.1`;")
results, err := query.Read(ctx)
if err != nil {
    log.Fatalln(err)
}
for {
    row := make(map[string]bigquery.Value)
    err := results.Next(&row)
    if err == iterator.Done {
        return
    }
    if err != nil {
        log.Fatalln(err)
    }
    fmt.Println(row)
}
Nothing happens and the result is:
map[age:19 name:Varun]
map[name:Raja age:27]
Table 1, the destination, is unchanged.
What if source and destination had the same schemas in the copy?
For example:
copier := test.Table("1").CopierFrom(test.Table("1"))
Then the copy succeeds! And table 1 has twice the rows it initially had.
map[name:Varun age:19]
map[age:27 name:Raja]
map[name:Varun age:19]
map[name:Raja age:27]
But what if we somehow wanted to combine tables even with different schemas?
Well, first you need a GCP billing account, as you are technically doing data manipulation (DML). You can get $300 of free credit.
Then the following will work:
query := client.Query("SELECT * FROM `test.2`;")
query.SchemaUpdateOptions = []string{"ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"}
query.CreateDisposition = bigquery.CreateIfNeeded
query.WriteDisposition = bigquery.WriteAppend
query.QueryConfig.Dst = client.Dataset("test").Table("1")
results, err := query.Read(ctx)
And the result is:
map[pet_name:<nil> type:<nil> name:Varun age:19]
map[name:Raja age:27 pet_name:<nil> type:<nil>]
map[pet_name:ramesh type:cat name:<nil> age:<nil>]
map[pet_name:jimmy type:dog name:<nil> age:<nil>]
EDIT
Instead of query.Read() you can use query.Run() if you just want to run the query and not fetch the results back, as shown below:
if _, err := query.Run(ctx); err != nil {
    log.Fatalln(err)
}
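If you do want to block until the job finishes and surface job-level errors (the same pattern as the copy code in the question), a sketch:
job, err := query.Run(ctx)
if err != nil {
    log.Fatalln(err)
}
status, err := job.Wait(ctx)
if err != nil {
    log.Fatalln(err)
}
// status.Err() reports errors hit by the job itself, e.g. a schema mismatch.
if err := status.Err(); err != nil {
    log.Fatalln(err)
}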
Important things to note:
We have set query.SchemaUpdateOptions to include ALLOW_FIELD_ADDITION which will allow for the resulting table to have columns not originally present.
We have set query.WriteDisposition to bigquery.WriteAppend for data to be appended.
We have set query.QueryConfig.Dst to client.Dataset("test").Table("1") which means the result of the query will be uploaded to 1.
Values that are present in only one of the tables are null in the combined result, which shows up as nil on the Go side.
This hack will give you the same results as combining two tables.
Hope this helps.

"sql: no rows in result set"

I am handling user auth data posted to my Go backend through an HTML form. I am building on some boilerplate to learn Go better.
My problem is what the following func returns:
func (ctrl UserController) Signin(c *gin.Context) {
    var signinForm forms.SigninForm
    user, err := userModel.Signin(signinForm)
    if err := c.ShouldBindWith(&signinForm, binding.Form); err != nil {
        c.JSON(406, gin.H{"message": "Invalid signin form", "form": signinForm})
        c.Abort()
        return
    }
    if err == nil {
        session := sessions.Default(c)
        session.Set("user_id", user.ID)
        session.Set("user_email", user.Email)
        session.Set("user_name", user.Name)
        session.Save()
        c.JSON(200, gin.H{"message": "User signed in", "user": user})
    } else {
        c.JSON(406, gin.H{"message": "Invalid signin details", "error": err.Error()})
    }
}
The first if statement validates the input, and that works fine (error if the email isn't in proper email format, no error if it is). However, if input is properly validated, the else clause of the second statement is triggered, and the following JSON is returned:
{
    "error": "sql: no rows in result set",
    "message": "Invalid signin details"
}
It is probably useful to also post the relevant code in my User model:
//User ...
type User struct {
    ID        int    `db:"id, primarykey, autoincrement" json:"id"`
    Email     string `db:"email" json:"email"`
    Password  string `db:"password" json:"-"`
    Name      string `db:"name" json:"name"`
    UpdatedAt int64  `db:"updated_at" json:"updated_at"`
    CreatedAt int64  `db:"created_at" json:"created_at"`
}

//UserModel ...
type UserModel struct{}

//Signin ...
func (m UserModel) Signin(form forms.SigninForm) (user User, err error) {
    err = db.GetDB().SelectOne(&user, "SELECT id, email, password, name, updated_at, created_at FROM public.user WHERE email=LOWER($1) LIMIT 1", form.Email)
    if err != nil {
        return user, err
    }
    bytePassword := []byte(form.Password)
    byteHashedPassword := []byte(user.Password)
    err = bcrypt.CompareHashAndPassword(byteHashedPassword, bytePassword)
    if err != nil {
        return user, errors.New("Invalid password")
    }
    return user, nil
}
How do I resolve the sql: no rows in result set error?
You should change the order of operations in your code.
First you need to get the data from the request with if err := c.ShouldBindWith(&signinForm, binding.Form); err != nil { ... }, and only after that should you try to get the data from the database with user, err := userModel.Signin(signinForm).
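A sketch of the handler with those two steps swapped (everything else is the code from the question):
func (ctrl UserController) Signin(c *gin.Context) {
    var signinForm forms.SigninForm

    // Bind and validate the form first ...
    if err := c.ShouldBindWith(&signinForm, binding.Form); err != nil {
        c.JSON(406, gin.H{"message": "Invalid signin form", "form": signinForm})
        c.Abort()
        return
    }

    // ... and only then query the database with the populated form.
    user, err := userModel.Signin(signinForm)
    if err != nil {
        c.JSON(406, gin.H{"message": "Invalid signin details", "error": err.Error()})
        return
    }

    session := sessions.Default(c)
    session.Set("user_id", user.ID)
    session.Set("user_email", user.Email)
    session.Set("user_name", user.Name)
    session.Save()
    c.JSON(200, gin.H{"message": "User signed in", "user": user})
}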