Dealing with collections in SQL

I have this table in which I want to register likes from users. The data type for likes is an array (INT[]), but I don't know if this is the best approach to handle like and unlike.
I have not found an effective way to manipulate the collection in order to toggle like/unlike for a user.
Can we use something like an associative array in SQL? Or what would be the best approach with an array? I could not find an example.
Thanks for pointing me in the right direction.
CREATE TABLE posts (
  pid SERIAL PRIMARY KEY,
  user_id INT REFERENCES users(uid),
  author VARCHAR REFERENCES users(username),
  title VARCHAR(255),
  content TEXT,
  date_created TIMESTAMP,
  like_user_id INT[] DEFAULT ARRAY[]::INT[],
  likes INT DEFAULT 0
);
const likePost = (req, res, next) => {
  const values = [req.body.id, req.body.user_id];
  console.log(values);
  const query = `UPDATE posts SET likes = likes - 1 WHERE pid = $1, UPDATE posts SET likes[1] = $2 WHERE pid = $1`;
  pool.query(query, values, (q_err, q_res) => {
    if (q_err) return next(q_err);
    res.json(`post ${req.body.id} successfully removed 👍`);
  });
};
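For what it's worth, PostgreSQL's array_append and array_remove functions can express the toggle directly against the schema above. Below is a minimal sketch of one way to do it (the toggleLike handler name is mine, not from the original code), reusing the same pool setup:
const toggleLike = (req, res, next) => {
  const values = [req.body.id, req.body.user_id];
  // If the user already liked the post, remove them and decrement the counter;
  // otherwise append them and increment it. Both CASE expressions see the
  // pre-update row values, so the two columns stay in sync.
  const query = `
    UPDATE posts
       SET like_user_id = CASE WHEN $2 = ANY (like_user_id)
                               THEN array_remove(like_user_id, $2)
                               ELSE array_append(like_user_id, $2) END,
           likes        = CASE WHEN $2 = ANY (like_user_id)
                               THEN likes - 1
                               ELSE likes + 1 END
     WHERE pid = $1
     RETURNING likes, like_user_id`;
  pool.query(query, values, (q_err, q_res) => {
    if (q_err) return next(q_err);
    res.json(q_res.rows[0]); // new like count plus the updated user list
  });
};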

Related

Why does a POST request with pool.query only work intermittently when using :id in the middle of the URL?

I wasn't quite sure how to phrase this question so feel free to make corrections to improve it as desired.
My goal is to make an HTTP POST that creates a comment for a post and adds the comment to the comments table in the database. I believe this necessitates doing an INSERT as well as a JOIN to attach the specific post id to the comment.
This is my first time including two statements in one query, so I am unsure if this is correct. I had read about using a UNION but haven't been able to figure out the correct syntax, as none of the examples included quotes '' around their statements.
My post route:
router.post(`/posts/:id/comments`, (request, response, next) => {
  const { id } = request.params; // tried with and without brackets {}
  const { comment_body } = request.body;
  // Testing for correct params
  console.log(id);
  console.log(comment_body);
  pool.query(
    'INSERT INTO comments(comment_body) VALUES($1)',
    [post_id, comment_body],
    'SELECT * FROM comments JOIN posts ON posts.post_id = commments.post_id',
    (err, res) => {
      if (err) return next(err);
    }
  );
});
What is strange is that this worked twice and then stopped working. There are two entries in the comments table, but any further posts don't do anything. This has only worked from the comments form and not yet in Postman.
It behaved differently in two separate tests. When using brackets around the id, the row was created in the table but no post_id was attached to it:
const { id } = request.params;
If I didn't use the brackets, the post_id was set in the table:
const id = request.params;
Here are my tables:
CREATE TABLE posts(
  post_id SERIAL,
  user_id INT,
  post_body CHARACTER varying(20000)
);
CREATE TABLE comments(
  id SERIAL,
  post_id INT,
  user_id INT,
  comment_body CHARACTER varying(20000)
);
Originally I had the post_id for comments set as serial but figured if that is supposed to be joined from the posts.post_id, it would probably need to be INT.
Thanks much for any direction.
I managed to solve this with the following:
router.post(`/posts/:id/comments`, async (request, response, next) => {
  try {
    const { id } = request.params;
    const { comment_body } = request.body;
    await pool.query(
      'INSERT INTO comments(post_id, comment_body) VALUES($1, $2)',
      [id, comment_body]
    );
  } catch (error) {
    console.log(error.message);
  }
});
Rather than using the JOIN, I just included the post's ID parameter in the original INSERT and passed it in that way. I had initially thought I had to do it as a join but couldn't get a second SQL statement to work. Thanks to snakecharmerb for the idea.
I also added async/await.
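One detail worth noting: the handler above never sends a response, so the client will hang until the request times out. A small extension (my sketch, not part of the original answer) returns the newly created comment using RETURNING *:
router.post(`/posts/:id/comments`, async (request, response, next) => {
  try {
    const { id } = request.params;
    const { comment_body } = request.body;
    // RETURNING * gives back the inserted row so it can be sent to the client
    const result = await pool.query(
      'INSERT INTO comments(post_id, comment_body) VALUES($1, $2) RETURNING *',
      [id, comment_body]
    );
    response.status(201).json(result.rows[0]);
  } catch (error) {
    next(error);
  }
});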

SQLite: Foreign Key "ON DELETE SET NULL" action not getting triggered

Why does ON DELETE SET NULL fail when deleting a row via the application code, but behave correctly when I manually execute an SQL statement?
I have a todo table and a category table. The todo table has a category_id foreign key that references id in the category table, and it was created with the "ON DELETE SET NULL" action.
create table `category` (
  `id` integer not null primary key autoincrement,
  `name` varchar(255) not null
);
create table `todo` (
  `id` integer not null primary key autoincrement,
  `title` varchar(255) not null,
  `complete` boolean not null default '0',
  `category_id` integer,
  foreign key(`category_id`) references `category`(`id`) on delete SET NULL on update CASCADE
);
I also have an endpoint in my application that allows users to delete a category.
categoryRouter.delete('/:id', async (req, res) => {
  const { id } = req.params
  await req.context.models.Category.delete(id)
  return res.status(204).json()
})
This route successfully deletes categories, but the problem is that related todo items are not getting their category_id property set to null, so they end up with a category id that no longer exists. Strangely though, if I open up my database GUI and manually execute the query to delete a category... DELETE FROM category WHERE id=1... the "ON DELETE SET NULL" hook is successfully firing. Any todo item that had category_id=1 is now set to null.
Full application source can be found here.
Figured it out, thanks to MikeT.
So apparently SQLite by default has foreign key support turned off. WTF!
To enable FKs, I had to change my code from this...
const knex = Knex(knexConfig.development)
Model.knex(knex)
to this...
const knex = Knex(knexConfig.development)
knex.client.pool.on('createSuccess', (eventId, resource) => {
  resource.run('PRAGMA foreign_keys = ON', () => {})
})
Model.knex(knex)
Alternatively, I could have done this inside of the knexfile.js...
module.exports = {
  development: {
    client: 'sqlite3',
    connection: {
      filename: './db.sqlite3'
    },
    pool: {
      afterCreate: (conn, cb) => {
        conn.run('PRAGMA foreign_keys = ON', cb)
      }
    }
  },
  staging: {},
  production: {}
}
FYI for other people who stumble across a similar problem: you need PRAGMA foreign_keys = ON not only for the connection handling the child table but also for the one handling the parent table.
When I set PRAGMA foreign_keys = ON only for the program which handles the child table, ON UPDATE CASCADE was enabled but ON DELETE SET NULL was still disabled. In the end I found out that I had forgotten PRAGMA foreign_keys = ON for the other program, which handles the parent table.
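Because the pragma is per-connection (which is exactly why the pool's afterCreate hook is needed), a quick way to sanity-check it is to ask the same knex instance. This is just a sketch, assuming the knex setup shown above:
// PRAGMA foreign_keys reports 1 when enforcement is on for the connection that runs it.
knex.raw('PRAGMA foreign_keys')
  .then((result) => {
    // With the sqlite3 client this resolves to the driver's rows,
    // e.g. [ { foreign_keys: 1 } ] once the afterCreate hook has run.
    console.log(result);
  });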

Optional column update if provided value for column is not null

I have following table:
CREATE TABLE IF NOT EXISTS categories
(
  id SERIAL PRIMARY KEY,
  title CHARACTER VARYING(100) NOT NULL,
  description CHARACTER VARYING(200) NULL,
  category_type CHARACTER VARYING(100) NOT NULL
);
I am using pg-promise, and I want to provide optional update of columns:
categories.update = function (categoryTitle, toUpdateCategory) {
  return this.db.oneOrNone(sql.update, [
    categoryTitle,
    toUpdateCategory.title,
    toUpdateCategory.category_type,
    toUpdateCategory.description
  ])
}
categoryTitle - is required
toUpdateCategory.title - is required
toUpdateCategory.category_type - is optional (can be passed or undefined)
toUpdateCategory.description - is optional (can be passed or undefined)
I want to build an UPDATE query that updates only the provided columns:
UPDATE categories
SET title = $2,
    -- SET category_type = $3 if $3 is not NULL, otherwise keep the old category_type value
    -- SET description = $4 if $4 is not NULL, otherwise keep the old description value
WHERE title = $1
RETURNING *;
How can I achieve this optional column update in Postgres?
You could coalesce between the old and the new values:
UPDATE categories
SET title = $2,
    category_type = COALESCE($3, category_type),
    description = COALESCE($4, description) -- etc...
WHERE title = $1
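For reference, wiring that query into the original update function could look something like this (a sketch under the same pg-promise setup; pg-promise already formats undefined like null, but the explicit ?? null makes the intent obvious):
categories.update = function (categoryTitle, toUpdateCategory) {
  return this.db.oneOrNone(
    `UPDATE categories
        SET title = $2,
            category_type = COALESCE($3, category_type),
            description = COALESCE($4, description)
      WHERE title = $1
      RETURNING *`,
    [
      categoryTitle,
      toUpdateCategory.title,
      toUpdateCategory.category_type ?? null, // NULL makes COALESCE keep the old value
      toUpdateCategory.description ?? null
    ]
  );
};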
The helpers syntax is best for any sort of dynamic logic with pg-promise:
/* logic for skipping columns: */
const skip = c => c.value === null || c.value === undefined;

/* reusable/static ColumnSet object: */
const cs = new pgp.helpers.ColumnSet(
  [
    'title',
    {name: 'category_type', skip},
    {name: 'description', skip}
  ],
  {table: 'categories'});

categories.update = function (title, category) {
  const condition = pgp.as.format(' WHERE title = $1', title);
  const update = () => pgp.helpers.update(category, cs) + condition;
  return this.db.none(update);
}
And if your optional column-properties do not even exist on the object when they are not specified, you can simplify the skip logic to just this (see Column logic):
const skip = c => !c.exists;
Used API: ColumnSet, helpers.update.
See also a very similar question: Skip update columns with pg-promise.
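A quick usage sketch of the function above (the category values here are made up): any property that is missing, null, or undefined is simply left out of the generated UPDATE.
// Only title and category_type end up in the SET list; description is skipped.
categories.update('Drinks', { title: 'Beverages', category_type: 'menu' })
  .then(() => {
    // done - the description column keeps its previous value
  });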

Upsert in KnexJS

I have an upsert query in PostgreSQL like:
INSERT INTO table
(id, name)
values
(1, 'Gabbar')
ON CONFLICT (id) DO UPDATE SET
name = 'Gabbar'
WHERE
table.id = 1
I need to use knex to run this upsert query. How do I go about this?
So I solved this using the following suggestion from Dotnil's answer on the Knex issues page:
var data = {id: 1, name: 'Gabbar'};
var insert = knex('table').insert(data);

var dataClone = {id: 1, name: 'Gabbar'};
delete dataClone.id;
var update = knex('table').update(dataClone).whereRaw('table.id = ' + data.id);

var query = `${ insert.toString() } ON CONFLICT (id) DO UPDATE SET ${ update.toString().replace(/^update\s.*\sset\s/i, '') }`;
return knex.raw(query)
  .then(function(dbRes){
    // stuff
  });
Hope this helps someone.
As of knex v0.21.10, a new onConflict method was introduced.
Official documentation says:
Implemented for the PostgreSQL, MySQL, and SQLite databases. A modifier for insert queries that specifies alternative behaviour in the case of a conflict. A conflict occurs when a table has a PRIMARY KEY or a UNIQUE index on a column (or a composite index on a set of columns) and a row being inserted has the same value as a row which already exists in the table in those column(s). The default behaviour in case of conflict is to raise an error and abort the query. Using this method you can change this behaviour to either silently ignore the error by using .onConflict().ignore() or to update the existing row with new data (perform an "UPSERT") by using .onConflict().merge().
So in your case, the implementation would be:
knex('table')
  .insert({
    id: id,
    name: name
  })
  .onConflict('id')
  .merge()
I've created a function for doing this and described it on the knex github issues page (along with some of the gotchas for dealing with composite unique indices).
const upsert = (params) => {
  const {table, object, constraint} = params;
  const insert = knex(table).insert(object);
  const update = knex.queryBuilder().update(object);
  return knex.raw(`? ON CONFLICT ${constraint} DO ? returning *`, [insert, update]).get('rows').get(0);
};
Example usage:
const objToUpsert = {a: 1, b: 2, c: 3}

upsert({
  table: 'test',
  object: objToUpsert,
  constraint: '(a, b)',
})
A note about composite nullable indices
If you have a composite index on (a, b) and b is nullable, then the values (1, NULL) and (1, NULL) are considered mutually unique by Postgres: NULL never compares equal to NULL, so the unique index does not treat the two rows as duplicates.
Yet another approach I could think of!
exports.upsert = (t, tableName, columnsToRetain, conflictOn) => {
  const insert = knex(tableName)
    .insert(t)
    .toString();
  const update = knex(tableName)
    .update(t)
    .toString();
  const keepValues = columnsToRetain.map((c) => `"${c}"=${tableName}."${c}"`).join(',');
  const conflictColumns = conflictOn.map((c) => `"${c.toString()}"`).join(',');
  let insertOrUpdateQuery = `${insert} ON CONFLICT( ${conflictColumns}) DO ${update}`;
  insertOrUpdateQuery = keepValues ? `${insertOrUpdateQuery}, ${keepValues}` : insertOrUpdateQuery;
  insertOrUpdateQuery = insertOrUpdateQuery.replace(`update "${tableName}"`, 'update');
  insertOrUpdateQuery = insertOrUpdateQuery.replace(`"${tableName}"`, tableName);
  return Promise.resolve(knex.raw(insertOrUpdateQuery));
};
Very simple. Adding onto Dorad's answer, you can choose which specific columns to upsert by passing them to merge:
knex('table')
  .insert({
    id: id,
    name: name
  })
  .onConflict('id')
  .merge(['name']); // put the column names you want to merge inside an array
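And if you want the other behaviour mentioned in the quoted docs, leaving the existing row untouched when the id already exists, the same builder chain takes .ignore() instead of .merge(). A sketch with the question's table and columns:
// Insert the row, but silently skip it if a row with this id already exists.
knex('table')
  .insert({ id: 1, name: 'Gabbar' })
  .onConflict('id')
  .ignore();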

The nested query does not have the appropriate keys

First, I have a function which takes two parameters (latitude and longitude):
RETURNS TABLE
AS
RETURN
(
    SELECT dbo.GeoCalculateDistance(@lat1Degrees, @lon1Degrees, Latitude, Longitude) AS Distance,
           PKRestaurantId AS PkKeyId
    FROM StRestaurant
)
And as you can see, I have a table called StRestaurant. In this table I have 4 columns (PkRestaurantId, RegionId, Longitude, Latitude).
I also need a method that takes 4 parameters:
public List<RestaurantDetailDto> GetRestaurant(int regionid, decimal latitude, decimal longitude, OrderType orderType)
{}
This method will give me the restaurants around me. But if I want to order this list by distance, I must join my Restaurant table with the function. Here is my query:
var query = from restaurant in context.StRestaurant
            join distance in context.CalculateDistanceTable(latitude, longitude) on restaurant.PKRestaurantId equals distance.PkKeyId
            where restaurant.FKRegionId == regionid
            select new
            {
                Restaurant = restaurant,
                DistanceTable = distance,
            };
Then I check the orderType:
switch (orderType)
{
    case OrderType.Distance:
        query = query.OrderBy(x => x.DistanceTable.Distance);
        break;
    // and the others
}
Lastly, I try to materialize the list:
var queryResult = query.ToList();
Every time, I get this error:
The nested query does not have the appropriate keys.
I also tried the following query, but it returns the same error:
var query = context.StRestaurant.Where(x => x.FKRegionId == regionid && x.IsActive).Join(
    context.CalculateDistanceTable(latitude, longitude),
    restaurant => restaurant.PKRestaurantId,
    result => result.PkKeyId,
    (restaurant, result) => new
    {
        Restaurant = restaurant,
        MinumumPackagePrice = restaurant.StRestaurantRegionRelation.FirstOrDefault(x => x.FKRestaurantId == restaurant.PKRestaurantId).MinumumPackageCharge,
        DistanceTable = result,
        RestaurantImage = restaurant.StRestaurantImage.Where(x => x.IsDefault && x.FKRestaurantId == restaurant.PKRestaurantId),
    }
);
Please help!!
I've seen this before when doing an .Include() on the result. I imagine your projection (in the second example) might be doing this internally. Could you add this to the first part?
In this case, I've had to add the .Include() on the source table:
from a in context.A.Include("relationship")
join b in context.MyFunction(...)
...
There are some things that you can try here. First, rewrite your SQL function so that it has a primary key:
CREATE FUNCTION CalculateDistanceTable
(
    -- Add the parameters for the function here
    @lat1Degrees float,
    @lon1Degrees float
)
RETURNS
@RestaurantDistances TABLE
(
    -- Add the column definitions for the TABLE variable here
    PkKeyId int NOT NULL PRIMARY KEY,
    Distance float NOT NULL
)
AS
BEGIN
    INSERT INTO @RestaurantDistances
    SELECT dbo.GeoCalculateDistance(@lat1Degrees, @lon1Degrees, Latitude, Longitude) AS Distance, PKRestaurantId AS PkKeyId
    FROM StRestaurant
    RETURN
END
GO
Also, you can try changing your LINQ join to use anonymous types for the join keys:
var query = from restaurant in context.StRestaurant
            join distance in context.CalculateDistanceTable(latitude, longitude) on new { Key = restaurant.PKRestaurantId } equals new { Key = distance.PkKeyId }
            where restaurant.FKRegionId == regionid
            select new
            {
                Restaurant = restaurant,
                DistanceTable = distance,
            };
If neither one of these helps let me know and I'll try to update this answer as appropriate.