Control flow over query results in sqlx (lazy/eager)

I'm implementing a messages table in Postgres (AWS RDS), and I'm using Go on the backend to query the table.
CREATE TABLE:
CREATE TABLE IF NOT EXISTS msg.Messages(
    id SERIAL PRIMARY KEY,
    content BYTEA,
    timestamp DATE
);
Here is the INSERT query:
INSERT INTO msg.Messages (content, timestamp) VALUES ('blob', 'date')
RETURNING id;
Now I want to be able to fetch a specific message, like this:
specific SELECT query:
SELECT id, content, timestamp
FROM msg.Messages
WHERE id = $1
Now let's say the user was offline for a long time and needs to fetch a lot of messages from this table, say 10M of them. I don't want to return all the results at once, because that might blow up the app's memory.
Each user saves the last message.id he fetched, so the query will be:
SELECT id, content, timestamp
FROM msg.Messages
WHERE id > $1
Implementing paging on top of this query feels like reinventing the wheel; there must be an out-of-the-box solution for this.
I'm using sqlx, here is a rough example of my code:
query := `
    SELECT id, content, timestamp
    FROM msg.Messages
    WHERE id > ?
`
args := []interface{}{5}
query = ado.db.Rebind(query) // rewrites ? into $1 for Postgres
rows, err := ado.db.Queryx(query, args...)
if err != nil {
    return nil, err
}
defer rows.Close()
var res []Message
for rows.Next() {
    msg := Message{}
    if err = rows.StructScan(&msg); err != nil {
        return nil, err
    }
    res = append(res, msg)
}
return res, nil
How can I convert this code to do lazy loading, so that only a call to rows.Next() fetches the next item (rather than loading all the items in advance)? And what about the garbage collector: will it release the memory on each iteration of rows.Next()?
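For what it's worth, Queryx is already lazy at the Go level: rows.Next() materializes one row per call (though the driver may still buffer network data), and a row you stop referencing becomes eligible for garbage collection; it's the res slice above that keeps everything alive. To bound memory firmly, you can page with the keyset you already have. A sketch (fetchSince, batchSize, and handle are my names; it assumes ado.db is an *sqlx.DB and Message has an ID int field):

const batchSize = 1000 // assumed tuning knob: the most rows held at once

// fetchSince streams messages with id > lastID in bounded batches,
// invoking handle once per message instead of accumulating a slice.
func fetchSince(db *sqlx.DB, lastID int, handle func(Message) error) error {
    query := db.Rebind(`
        SELECT id, content, timestamp
        FROM msg.Messages
        WHERE id > ?
        ORDER BY id
        LIMIT ?`)
    for {
        rows, err := db.Queryx(query, lastID, batchSize)
        if err != nil {
            return err
        }
        n := 0
        for rows.Next() {
            var msg Message
            if err := rows.StructScan(&msg); err != nil {
                rows.Close()
                return err
            }
            if err := handle(msg); err != nil {
                rows.Close()
                return err
            }
            lastID = msg.ID // keyset cursor: remember the last id seen
            n++
        }
        rows.Close()
        if err := rows.Err(); err != nil {
            return err
        }
        if n < batchSize {
            return nil // a short batch means the table is drained
        }
    }
}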

Related

Gorm Go - Query with an empty slice of primary keys

The Gorm documentation for Struct & Map Conditions provides the following snippet for querying a table with a slice of primary keys
// Slice of primary keys
db.Where([]int64{20, 21, 22}).Find(&users)
// SELECT * FROM users WHERE id IN (20, 21, 22);
However, if the slice is empty then all records are returned. Looking at the source code for the Find function I can see conditions are only added if len(conds) > 0
// Find find records that match given conditions
func (db *DB) Find(dest interface{}, conds ...interface{}) (tx *DB) {
    tx = db.getInstance()
    if len(conds) > 0 {
        if exprs := tx.Statement.BuildCondition(conds[0], conds[1:]...); len(exprs) > 0 {
            tx.Statement.AddClause(clause.Where{Exprs: exprs})
        }
    }
    tx.Statement.Dest = dest
    return tx.callbacks.Query().Execute(tx)
}
This is the opposite of what my SQLite command line returns. If the condition is empty then no records are returned (because they all have a primary key)
-- no records returned
SELECT * FROM my_table WHERE id IN ();
Question
Is there a way to query a slice of primary keys using Gorm such that if the slice is empty, no records are returned?
Since primary keys increase from 1, an id of 0 can be prepended so that an empty slice still yields a query that matches nothing:
ids := []int64{20, 21, 22}
db.Where(append([]int64{0}, ids...)).Find(&users)
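Alternatively, a small guard avoids the sentinel id altogether. A sketch (findByIDs is my name; it assumes a User model and a *gorm.DB handle):

func findByIDs(db *gorm.DB, ids []int64) ([]User, error) {
    var users []User
    if len(ids) == 0 {
        return users, nil // empty key slice: return no rows without querying
    }
    err := db.Where(ids).Find(&users).Error
    return users, err
}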

How to Fetch specific number of characters from a string using gorm?

I am using SQLite. If I have a text post or an article that is, say, a 400-character string, I want to extract only the first 100 characters from it to be used on the front-end.
In line 4 I extracted the latest 5 posts from the db, but I want to limit the body of the post to the first 100 characters only:
func GetLatestPosts() *[]Post {
    db := database.Connect()
    var posts []Post
    db.Limit(5).Order("created_at desc").Find(&posts)
    // SELECT LEFT(body, 100) FROM posts
    // db.Raw("SELECT id, title, body, tags FROM posts").Scan(&posts)
    return &posts
}
How can I do that?
What you want is to use either Select or Raw to run the SQLite substr function on the post's body.
Like this:
err := db.Select("id, title, substr(body, 1, 100) AS body, tags").
    Limit(5).
    Order("created_at DESC").
    Find(&posts).
    Error
// error check
Or with Raw:
err := db.Raw("SELECT id, title, substr(body, 1, 100) AS body, tags FROM posts").
    Scan(&posts).
    Error
// error check
The key thing to remember is to keep the column aliased as body, so that scanning into your model works as before.
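For reference, this is roughly the Post model shape that the scan assumes (the field names are guesses from the question; gorm.Model supplies ID and CreatedAt):

type Post struct {
    gorm.Model        // provides ID, CreatedAt, etc.
    Title      string
    Body       string // substr(body, 1, 100) lands here via the AS body alias
    Tags       string
}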

Sqlboiler get only the desired columns

I am trying to follow the examples in sqlboiler (https://github.com/volatiletech/sqlboiler), but I couldn't find a way to get just the columns queried in the select statement.
users, err := models.Users(
    Select("id", "name"),
    Where("age > ?", 30),
).All(ctx, db)
In this example, .All returns the entire tuple, containing empty/nil values for the columns that weren't queried. I was wondering if there is a way to return a map/list (or any relevant data structure/format) of just the queried columns. Thanks!
You get all the fields because you get instances of models.User, which has all the fields, whether you want them or not.
One thing you can do is write your own cut-down User struct, and bind to that.
type LiteUser struct {
    ID   int    `boil:"id"`
    Name string `boil:"name"`
}

var users []*LiteUser
err := models.Users(
    Select("id", "name"),
    Where("age > ?", 30),
).Bind(ctx, db, &users)
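(Select and Where here are sqlboiler's query mods, which the examples dot-import; with a regular import the same query reads as below, assuming the sqlboiler v4 import path.)

import "github.com/volatiletech/sqlboiler/v4/queries/qm"

var users []*LiteUser
err := models.Users(
    qm.Select("id", "name"),
    qm.Where("age > ?", 30),
).Bind(ctx, db, &users)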

Berkeley DB equivalent of SELECT COUNT(*) All, SELECT COUNT(*) WHERE LIKE "%...%"

I'm looking for the Berkeley DB equivalent of
SELECT COUNT(*), and SELECT COUNT(*) WHERE ... LIKE "%...%"
I have got 100 records with keys: 1, 2, 3, ... 100.
I have got the following code:
// Key = 1
i = 1;
strcpy_s(buf, to_string(i).size() + 1, to_string(i).c_str());
key.data = buf;
key.size = to_string(i).size() + 1;
key.flags = 0;
data.data = rbuf;
data.size = sizeof(rbuf);
data.flags = 0;
// Cursor
if ((ret = dbp->cursor(dbp, NULL, &dbcp, 0)) != 0) {
    dbp->err(dbp, ret, "DB->cursor");
    goto err1;
}
// Get
dbcp->get(dbcp, &key, &data, DB_SET_RANGE);
db_recno_t cnt;
dbcp->count(dbcp, &cnt, 0);
cout << "count: " << cnt << endl;
Count cnt is always 1, but I expected it to count all the partial key matches for Key = 1: 1, 10, 11, 21, ... 91.
What is wrong in my code/understanding of DB_SET_RANGE?
Is it possible to get SELECT COUNT(*) WHERE ... LIKE "%...%" in BDB?
Also, is it possible to get a SELECT COUNT(*) of all the records in the file?
Thanks
You're expecting Berkeley DB to be way more high-level than it actually is. It doesn't contain anything like what you're asking for. If you want the equivalent of WHERE field LIKE '%1%' you have to make a cursor, read through all the values in the DB, and do the string comparison yourself to pick out the ones that match. That's what an SQL engine actually does to implement your query, and if you're using libdb instead of an SQL engine, it's up to you. If you want it done faster, you can use a secondary index (much like you can create additional indexes for a table in SQL), but you have to provide some code that links the secondary index to the main DB.
DB_SET_RANGE is useful to optimize a very specific case: you're looking for items whose key starts with a specific substring. You can DB_SET_RANGE to find the first matching key, then DB_NEXT your way through the matches, and stop when you get a key that doesn't match. This works only on DB_BTREE databases because it depends on the keys being returned in lexical order.
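A rough sketch of that prefix scan (assuming, as in the question's code, string keys stored with their terminating NUL in a DB_BTREE database, and an open cursor dbcp):

/* Position on the first key >= "1" with DB_SET_RANGE, then DB_NEXT
 * forward while the key still starts with the prefix. This relies on
 * DB_BTREE returning keys in lexical order. */
DBT k, d;
memset(&k, 0, sizeof(k));
memset(&d, 0, sizeof(d));
char prefix[] = "1";
k.data = prefix;
k.size = sizeof(prefix); /* 2 bytes: '1' plus the NUL */
int ret = dbcp->get(dbcp, &k, &d, DB_SET_RANGE);
unsigned long matches = 0;
while (ret == 0 && ((char *)k.data)[0] == '1') {
    matches++;
    ret = dbcp->get(dbcp, &k, &d, DB_NEXT);
}
printf("keys with prefix \"1\": %lu\n", matches);

This counts the keys that start with "1" (1, 10 through 19, and 100), but not 21 or 91; a contains-style LIKE "%1%" still requires the full scan described above.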
The count method tells you how many exact duplicate keys there are for the item at the current cursor position.
You can use the DB->stat() method. For example, to get the number of unique keys in the B-tree:
bool row_amount(DB *db, size_t &amount) {
    amount = 0;
    if (db == NULL) return false;
    DB_BTREE_STAT *sp;
    int ret = db->stat(db, NULL, &sp, 0);
    if (ret != 0) return false;
    amount = (size_t)sp->bt_nkeys; // number of unique keys in the B-tree
    free(sp);                      // DB->stat allocates the stat struct; the caller frees it
    return true;
}

Sql Select - Total Rows Returned

Using the database/sql package and drivers for Postgres and MySQL, I want to achieve the following: I want to be able to select one row and know whether there were zero rows, one row, or more than one row. The QueryRow function does not achieve that because, as far as I can ascertain, it returns one row without error regardless of whether there is more than one row. For my situation, more than one row may be an error, and I want to know about it. I want to create a general function to do this.

I looked at creating a function that uses the Query function, but I do not know how to return the first row if there is more than one row. I want to return the fact that there is more than one row, but I also want to return the first row. To determine that there is more than one row, I have to call Next, and that overwrites the first row. Obviously I can achieve this without creating a general function, but I want a function to do it because I need to do this in a number of places.

Could someone please explain to me how to achieve this, i.e. how to return the first row from a function when a successful Next has been done, or when Next returned nothing.
I'm using both database/sql & MySQLDriver to achieve this. You can download MySQLDriver at https://github.com/go-sql-driver/ .
I wrote an execQuery function myself to get one or more rows from the database. It's based on MySQL, but I think the same approach works for Postgres with a similar implementation.
Assume you have a DB table named test with columns named id, name, and age.
Code:
var db *sql.DB // it should be initialized by "sql.Open()"

func execQuery(SQL string, args ...interface{}) (rows *sql.Rows, is_succeed bool) {
    rows, err := db.Query(SQL, args...)
    var ret bool
    if err == nil && rows != nil { // if the DB query fails, rows will be nil and we return false
        ret = true
    } else {
        ret = false
    }
    return rows, ret
}
Usage:
var name, age string
rows, is_succeed := execQuery("SELECT `name`, `age` FROM `test` WHERE `id` = ?", "123")
if !is_succeed {
    // error
    return
}
defer rows.Close()
for rows.Next() { // with zero result rows, this loop body never executes
    if err := rows.Scan(&name, &age); err != nil {
        // check the error & do something
    }
}
If you want to know how many rows were returned, just add a counter inside the for loop; you can also get this with SQL (COUNT(*)).
sql.Rows works like a cursor over the returned rows: it starts before the first row, and rows.Next() advances it through every row. I think that's what you asked.
If you really need row counts very often, a cache such as MemcacheDB or Redis, or a simple counter you maintain yourself, can solve the problem.
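As for the general function the original question asks for, the trick is to Scan the first row before calling Next() a second time; the second Next() then merely reports whether more rows exist. A minimal database/sql sketch (queryOne and its parameters are my names):

// queryOne runs the query, scans the first row into dest (one pointer
// per selected column), and reports whether a row was found and whether
// extra rows existed beyond the first.
func queryOne(db *sql.DB, query string, dest []interface{}, args ...interface{}) (found, extra bool, err error) {
    rows, err := db.Query(query, args...)
    if err != nil {
        return false, false, err
    }
    defer rows.Close()
    if !rows.Next() {
        return false, false, rows.Err() // zero rows, or an iteration error
    }
    if err := rows.Scan(dest...); err != nil {
        return true, false, err
    }
    extra = rows.Next() // safe to peek: the first row is already scanned
    return true, extra, rows.Err()
}

// usage:
var id int
var name string
found, extra, err := queryOne(db, "SELECT id, name FROM test WHERE name = ?",
    []interface{}{&id, &name}, "bob")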