How to fetch a specific number of characters from a string using GORM? - sql

I am using SQLite. If I have a text post or an article that is, say, a 400-character string, I want to extract only the first 100 characters from it to be used on the front-end.
In the function below I fetch the latest 5 posts from the DB, but I want to limit the body of each post to its first 100 characters only:
func GetLatestPosts() *[]Post {
    db := database.Connect()
    var posts []Post
    db.Limit(5).Order("created_at desc").Find(&posts)
    // SELECT LEFT(body, 100) FROM posts
    // db.Raw("SELECT id, title, body, tags FROM posts").Scan(&posts)
    return &posts
}
How can I do that?

What you want is to use either Select or Raw to run SQLite's substr function on the post's body.
Like this:
err := db.Select("id, title, substr(body, 1, 100) AS body, tags").
    Limit(5).
    Order("created_at DESC").
    Find(&posts).
    Error
// error check
Or with Raw:
err := db.Raw("SELECT id, title, substr(body, 1, 100) AS body, tags FROM posts").
    Scan(&posts).
    Error
// error check
The key thing to remember is to alias the column as body so that scanning into your model works as before.
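If you prefer to keep the query untouched, you can also truncate in Go after scanning. Note that substr counts characters while Go string slicing counts bytes, so convert to []rune to avoid cutting a multi-byte character in half. A minimal sketch (the helper name is mine, not part of GORM):

```go
package main

// truncateRunes returns at most n characters (runes) of s, so a
// multi-byte UTF-8 character is never split in the middle.
func truncateRunes(s string, n int) string {
	r := []rune(s)
	if len(r) <= n {
		return s
	}
	return string(r[:n])
}
```

After Find(&posts) you would apply post.Body = truncateRunes(post.Body, 100) to each post; the trade-off is that the full body still travels from the database, whereas substr trims it server-side.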

Related

How to add "x in y" clause in ExpressJS

In Postgres I have created a simple table called tags with these columns:
tag_id
tag
owner_id
In ExpressJS, this query works fine:
return pool.query(`SELECT tag_id, tag FROM tags WHERE owner_id = $1`, [ownerId]);
Now what I want to do is restrict which tags are returned via an array of tag values I'm passing in:
const tagsCsv = convertArrayToCSV(tags); // Example: "'abc','def'"
return pool.query(`SELECT tag_id, tag FROM tags WHERE owner_id = $1 AND tag IN ($2)`, [ownerId, tagsCsv]);
The code doesn't crash, but it returns an empty array even though I know for a fact that both abc and def are sample tags in my table.
I thus suspect something is wrong with my syntax but am not sure what. Might anyone have any ideas?
I did more searching and found this: node-postgres: how to execute "WHERE col IN (<dynamic value list>)" query?
Following the examples in there, I stopped converting the string array to a CSV string and instead did this:
const tags: Array<string> = values.tags;
return pool.query(`SELECT tag_id, tag FROM tags WHERE owner_id = $1 AND tag = ANY($2::text[])`, [ownerId, tags]);
This worked perfectly, returning the records I was expecting!
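The same pitfall exists in Go's lib/pq, where a single placeholder likewise binds one value, not a list; there you can also use = ANY with pq.Array, or generate numbered placeholders for an IN clause. A sketch of the latter, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// inPlaceholders returns "$start, $start+1, ..." for n values, suitable
// for splicing into an IN (...) clause when the first parameters of the
// query are already taken.
func inPlaceholders(start, n int) string {
	parts := make([]string, n)
	for i := 0; i < n; i++ {
		parts[i] = fmt.Sprintf("$%d", start+i)
	}
	return strings.Join(parts, ", ")
}
```

For example, "SELECT tag_id, tag FROM tags WHERE owner_id = $1 AND tag IN (" + inPlaceholders(2, len(tags)) + ")" with the tag values appended to the argument list after ownerId.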

Add array of other records from the same table to each record

My project is a Latin language learning app. My DB has all the words I'm teaching, in the table 'words'. It has the lemma (the main form of the word), along with the definition and other information the user needs to learn.
I show one word at a time for them to guess/remember what it means. The correct word is shown along with some wrong words, like:
What does Romanus mean? Greek - /Roman/ - Phoenician - barbarian
What does domus mean? /house/ - horse - wall - senator
The wrong options are randomly drawn from the same table, and must be from the same part of speech (adjective, noun...) as the correct word; but I am only interested in their lemma. My return value looks like this (some properties omitted):
[
{ lemma: 'Romanus', definition: 'Roman', options: ['Greek', 'Phoenician', 'barbarian'] },
{ lemma: 'domus', definition: 'house', options: ['horse', 'wall', 'senator'] }
]
What I am looking for is a more efficient way of doing it than my current approach, which runs a new query for each word:
// All the necessary requires are here
class Word extends Model {
  static async fetch() {
    const words = await this.findAll({
      limit: 10,
      order: [Sequelize.literal('RANDOM()')],
      attributes: ['lemma', 'definition'], // also a few other columns I need
    });
    const wordsWithOptions = await Promise.all(words.map(this.addOptions.bind(this)));
    return wordsWithOptions;
  }

  static async addOptions(word) {
    const options = await this.findAll({
      order: [Sequelize.literal('RANDOM()')],
      limit: 3,
      attributes: ['lemma'],
      where: {
        partOfSpeech: word.dataValues.partOfSpeech,
        lemma: { [Op.not]: word.dataValues.lemma },
      },
    });
    return { ...word.dataValues, options: options.map((row) => row.dataValues.lemma) };
  }
}
So, is there a way I can do this with raw SQL? How about Sequelize? One thing that still helps me is to give a name to what I'm trying to do, so that I can Google it.
EDIT: I have tried the following and at least got somewhere:
const words = await this.findAll({
  limit: 10,
  order: [Sequelize.literal('RANDOM()')],
  attributes: {
    include: [[sequelize.literal(`(
      SELECT lemma FROM words AS options
      WHERE "partOfSpeech" = "options"."partOfSpeech"
      ORDER BY RANDOM() LIMIT 1
    )`), 'options']],
  },
});
Now, there are two problems with this. First, I only get one option, when I need three; but if the query has LIMIT 3, I get: SequelizeDatabaseError: more than one row returned by a subquery used as an expression.
The second error is that while the code above does return something, it always gives the same word as an option! I thought to remedy that with WHERE "partOfSpeech" = "options"."partOfSpeech", but then I get SequelizeDatabaseError: invalid reference to FROM-clause entry for table "words".
So, how do I tell PostgreSQL "for each row in the result, add a column with an array of three lemmas, WHERE existingRow.partOfSpeech = wordToGoInTheArray.partOfSpeech?"
Revised
Well, that seems like a different question and perhaps should be posted that way, but...
The main technique remains the same: JOIN instead of sub-select. The difference is generating the list of lemmas and then piping them into the initial query. In a single statement this can get nasty.
As a single statement (actually this turned out not to be too bad):
select w.lemma, w.definition, string_to_array(string_agg(o.definition, ','), ',') as options
from words w
join lateral
    (select definition
     from words o
     where o.part_of_speech = w.part_of_speech
       and o.lemma != w.lemma
     order by random()
     limit 3
    ) o on 1=1
where w.lemma in (select lemma
                  from words
                  order by random()
                  limit 4 --<<< replace with parameter
                 )
group by w.lemma, w.definition;
The other approach builds a small SQL function to randomly select a specified number of lemmas. This selection is then piped into the (renamed) function from the previous fiddle.
create or replace
function exam_lemma_definition_options(lemma_array_in text[])
returns table (lemma text
,definition text
,option text[]
)
language sql strict
as $$
select w.lemma, w.definition, string_to_array(string_agg(o.definition,','), ',') as options
from words w
join lateral
(select definition
from words o
where o.part_of_speech = w.part_of_speech
and o.lemma != w.lemma
order by random()
limit 3
) o on 1=1
where w.lemma = any(lemma_array_in)
group by w.lemma, w.definition;
$$;
create or replace
function exam_lemmas(num_of_lemmas integer)
returns text[]
language sql
strict
as $$
select string_to_array(string_agg(lemma,','),',')
from (select lemma
from words
order by random()
limit num_of_lemmas
) ll
$$;
Using this approach, your calling code reduces to a single SQL statement:
select *
from exam_lemma_definition_options(exam_lemmas(4))
order by lemma;
This permits you to specify the number of lemmas to select (in this case 4), limited only by the number of rows in the words table. See the revised fiddle.
Original
Instead of using a sub-select to get the option words, just JOIN.
select w.lemma, w.definition, string_to_array(string_agg(o.definition,','), ',') as options
from words w
join lateral
(select definition
from words o
where o.part_of_speech = w.part_of_speech
and o.lemma != w.lemma
order by random()
limit 3
) o on 1=1
where w.lemma = any(array['Romanus', 'domus'])
group by w.lemma, w.definition;
See the fiddle. Obviously this will not necessarily produce the same options as your question shows, due to the random() selection, but it will get matching parts of speech. I will leave translation to your source language to you; or you can use the function option and reduce your SQL to a simple select *.

How many rows got inserted into SQL database with ts-postgres?

I use ts-postgres and INSERT INTO to add new rows to my table.
import { Client } from 'ts-postgres';
let query = '...';
let res = await client.query(query, [username, email]);
The result I get from client.query is the following:
Result {names: Array(0), rows: Array(0), status: "INSERT 0 1"}
Result {names: Array(0), rows: Array(0), status: "INSERT 0 0"}
In the first case 1 line got added, in the second 0. Do I really need to parse the status string in order to see how many rows got added?
Yep, that's something you have to deal with when working at a low level (without an ORM).
So here's a simple function to check the number of inserted rows (note that split needs an explicit separator, otherwise it returns the whole string as a single element):
checkInserted(result): number {
    const status = result.status.split(' ');
    return parseInt(status[status.length - 1], 10);
}
You can customize it according to your requirements.
It looks like the answer is yes, you really do need to parse the status string.
According to the Postgres protocol documentation, when the database finishes executing an insert command, it sends a CommandComplete message back to the client. The CommandComplete message consists of a byte that identifies the message type, a length, and a "tag," which is a string:
For an INSERT command, the tag is INSERT oid rows, where rows is the
number of rows inserted. oid used to be the object ID of the inserted
row if rows was 1 and the target table had OIDs, but OIDs system
columns are not supported anymore; therefore oid is always 0.
That tag is the status that you are seeing. There's nothing else in the CommandComplete message.
The Node Postgres client does include a rowCount member in its result, but if you look at the code, you will see that it just parses the count out of the status string. The Java JDBC driver parses the string as well.
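Whatever client library you use, parsing the tag is trivial, since the row count is always the last space-separated field of the CommandComplete tag. A Go sketch for illustration (the function name is mine):

```go
package main

import (
	"errors"
	"strconv"
	"strings"
)

// rowsFromTag extracts the row count from a CommandComplete tag such as
// "INSERT 0 1" or "UPDATE 3". Per the protocol, the count is always the
// last space-separated field.
func rowsFromTag(tag string) (int, error) {
	fields := strings.Fields(tag)
	if len(fields) == 0 {
		return 0, errors.New("empty command tag")
	}
	return strconv.Atoi(fields[len(fields)-1])
}
```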

Control flow over query results in SQLX (lazy/eager)

I'm implementing a messages table in Postgres (AWS RDS) and I'm using Go as the backend to query the table.
CREATE TABLE:
CREATE TABLE IF NOT EXISTS msg.Messages(
    id SERIAL PRIMARY KEY,
    content BYTEA,
    timestamp DATE
);
Here is the INSERT query:
INSERT INTO msg.Messages (content,timestamp) VALUES ('blob', 'date')
RETURNING id;
Now I want to be able to fetch a specific message, like this:
specific SELECT query:
SELECT id, content,timestamp
FROM msg.Messages
WHERE id = $1
Now let's say a user was offline for a long time and needs to get a lot of messages from this table, say 10M. I don't want to return all the results at once, because that might blow up the app's memory.
Each user saves the last message.id he fetched, so the query will be:
SELECT id, content, timestamp
FROM msg.Messages
WHERE id > $1
Implementing paging in this query feels like reinventing the wheel; there must be an out-of-the-box solution for that.
I'm using sqlx; here is a rough example of my code:
query := `
    SELECT id, content, timestamp
    FROM msg.Messages
    WHERE id > $1
`
args := []interface{}{5}
query = ado.db.Rebind(query)
rows, err := ado.db.Queryx(query, args...)
if err != nil {
    return nil, err
}
defer rows.Close()

var res []Message
for rows.Next() {
    msg := Message{}
    if err := rows.StructScan(&msg); err != nil {
        return nil, err
    }
    res = append(res, msg)
}
return res, nil
How can I convert this code to use lazy loading, so that each rows.Next() fetches only the next item instead of loading all items in advance? And what about the garbage collector:
will it release the memory on each iteration of rows.Next()?
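As far as I know, the lib/pq driver already streams: rows.Next() reads one row at a time from the connection, so it is the res slice that accumulates everything in memory. One way to avoid that is to hand each row to a callback instead of collecting a slice; values from previous iterations become eligible for garbage collection once nothing references them. A sketch of that pattern; the RowScanner interface is mine, introduced so the logic can be shown without a live database (a *sqlx.Rows satisfies it):

```go
package main

// RowScanner abstracts the part of *sqlx.Rows we need: iteration and
// scanning into a struct.
type RowScanner interface {
	Next() bool
	StructScan(dest interface{}) error
}

// Message mirrors the msg.Messages columns used in the question.
type Message struct {
	ID      int
	Content []byte
}

// forEachMessage scans one row at a time and passes it to fn, so only a
// single Message is live at once instead of a growing slice.
func forEachMessage(rows RowScanner, fn func(Message) error) error {
	for rows.Next() {
		var m Message
		if err := rows.StructScan(&m); err != nil {
			return err
		}
		if err := fn(m); err != nil {
			return err
		}
	}
	return nil
}
```

Combining this with a LIMIT on the keyset query (WHERE id > $1 ORDER BY id LIMIT n) bounds both the result set and the connection hold time per batch.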

"Operator does not exist: integer =?" when using Postgres

I have a simple SQL query called within the QueryRow method provided by Go's database/sql package.
import (
    "github.com/codegangsta/martini"
    "github.com/martini-contrib/render"
    "net/http"
    "database/sql"
    "fmt"
    _ "github.com/lib/pq"
)
type User struct {
    Name string
}

func Show(db *sql.DB, params martini.Params) {
    id := params["id"]
    row := db.QueryRow(
        "SELECT name FROM users WHERE id=?", id)
    u := User{}
    err := row.Scan(&u.Name)
    fmt.Println(err)
}
However, I'm getting the error pq: operator does not exist: integer =?. It looks like the code doesn't understand that the ? is just a placeholder. How can I fix this?
PostgreSQL natively works with numbered placeholders ($1, $2, ...) rather than the usual positional question marks. The documentation for the Go interface also uses numbered placeholders in its examples:
rows, err := db.Query("SELECT name FROM users WHERE age = $1", age)
It seems the Go interface isn't translating the question marks to numbered placeholders the way many interfaces do, so the question mark gets all the way to the database and confuses everything.
You should be able to switch to numbered placeholders instead of question marks:
row := db.QueryRow(
    "SELECT name FROM users WHERE id = $1", id)
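Alternatively, if you want to keep writing ? in your SQL, the sqlx package's Rebind can translate queries into the target dialect's placeholder style. The core idea is simple; a naive sketch of the translation (it ignores ? inside string literals, which a real implementation must handle):

```go
package main

import (
	"fmt"
	"strings"
)

// rebindQuestion rewrites each ? placeholder into PostgreSQL's numbered
// form ($1, $2, ...), roughly what sqlx.Rebind does for the DOLLAR
// bind type.
func rebindQuestion(query string) string {
	var b strings.Builder
	n := 0
	for _, r := range query {
		if r == '?' {
			n++
			fmt.Fprintf(&b, "$%d", n)
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}
```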