I have the following SQL query, which works:
await sequelize.query(
  "DELETE FROM `table_name` WHERE (?) IN (?)",
  {
    replacements: ["project_id", projectIds],
    type: QueryTypes.DELETE,
  }
);
But I also want to use a replacement for table_name like this:
await sequelize.query(
  "DELETE FROM (?) WHERE (?) IN (?)",
  {
    replacements: ["table_name", "project_id", projectIds],
    type: QueryTypes.DELETE,
  }
);
But this doesn't work and generates an error about SQL syntax. How can I make this work?
You are mixing up data value binding and identifier quoting.
There is an ancient issue in the repo, https://github.com/sequelize/sequelize/issues/4494, which sounds like the problem above.
I believe you can create a workaround that respects the different SQL dialects like this:
const queryInterface = sequelize.getQueryInterface();
const tableName = queryInterface.quoteIdentifier("projects");
const columnName = queryInterface.quoteIdentifier("project_id");

await sequelize.query(`DELETE FROM ${tableName} WHERE ${columnName} IN (?)`, {
  replacements: [projectIds],
  type: QueryTypes.DELETE,
});
Assuming you are using sequelize 6.x.
Hi, I have the following statement that I execute using node-oracledb:
await connection.execute(`SELECT * FROM TABLE WHERE NAME LIKE '%And%'`)
But now I want to bind a parameter instead of using a hard-coded value:
const queryText = 'And';
await connection.execute(`SELECT * FROM TABLE WHERE NAME LIKE '%:queryText%'`, {queryText});
It throws Error: ORA-01036: illegal variable name/number.
What is the correct way of binding a parameter here, since the documentation doesn't cover this situation?
Try with the following:
const queryText = 'And';
await connection.execute(
  "SELECT * FROM TABLE WHERE NAME LIKE :queryText",
  {
    queryText: { dir: oracledb.BIND_IN, val: '%' + queryText + '%', type: oracledb.STRING }
  }
);
Use string concatenation:
SELECT * FROM TABLE WHERE NAME LIKE '%' || :queryText || '%'
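For illustration, a node-oracledb call using that concatenation might look roughly like this (assuming an open connection and the same queryText value):

const queryText = 'And';
// The wildcards live in the SQL itself, so only the raw search term is bound
const result = await connection.execute(
  "SELECT * FROM TABLE WHERE NAME LIKE '%' || :queryText || '%'",
  { queryText } // shorthand bind by name; direction and type are inferred for strings
);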
Here is a working example.
let queryText = "John"
let sql = "SELECT * FROM TABLE WHERE NAME LIKE :queryText"
let binds = { queryText: { dir: oracledb.BIND_IN, val: queryText, type: oracledb.STRING } }
let result = await connection.execute(sql, binds)
Do not add '%' like the other people suggested.
I am using an SQLite database in Flutter, with the provide and sqlite libraries. I want to get an ordered list of Strings when I read the list from SQLite. How can I achieve this? Thank you for your response!
You can use the orderBy parameter inside the query method, like this:
Future<List<SingleShiftModel>> getShiftModelsForParticularGroup(
    String groupId) async {
  Database db = await database;
  final List<Map<String, dynamic>> maps = await db.query(
    allShiftsTableName,
    where: 'parentId = ?',
    orderBy: "date ASC", // your custom ordering goes here, exactly like in SQLite but WITHOUT the `ORDER BY` keyword
    whereArgs: [groupId],
  );
  return List.generate(
    maps.length,
    (i) => SingleShiftModel.toShiftModelObject(maps[i]),
  );
}
Still trying to get familiar with ScalikeJDBC. What is the simplest way to use plain SQL syntax with ScalikeJDBC to query a table for the max date? Something really simple like the code below works fine, but it gives me an error when I try to wrap the column in max().
val maxDate: Option[String] = DB readOnly { implicit session =>
  sql"select <column> from <table>"
    .map(rs => rs.string("<column>")).first.apply()
}
This does not work:
val maxDate: Option[String] = DB readOnly { implicit session =>
  sql"select max(<column>) from <table>"
    .map(rs => rs.string("<column>")).first.apply()
}
Error:
Failed to retrieve value because The column name not found.. If you're using SQLInterpolation,...
I expect this happens because the column max(MyColumn) does not have the name "MyColumn" by default. You may try something like this instead:
val maxDate: Option[String] = DB readOnly { implicit session =>
  sql"select max(MyColumn) as MyColumn_max from MyTable"
    .map(rs => rs.string("MyColumn_max")).first.apply()
}
I have an upsert query in PostgreSQL like:
INSERT INTO table (id, name)
VALUES (1, 'Gabbar')
ON CONFLICT (id) DO UPDATE SET
  name = 'Gabbar'
WHERE table.id = 1
I need to use knex to run this upsert query. How do I go about this?
So I solved this using the following suggestion from Dotnil's answer on the Knex issues page:
var data = { id: 1, name: 'Gabbar' };
var insert = knex('table').insert(data);

var dataClone = { id: 1, name: 'Gabbar' };
delete dataClone.id;
var update = knex('table').update(dataClone).whereRaw('table.id = ' + data.id);

var query = `${insert.toString()} ON CONFLICT (id) DO UPDATE SET ${update.toString().replace(/^update\s.*\sset\s/i, '')}`;

return knex.raw(query)
  .then(function (dbRes) {
    // stuff
  });
Hope this helps someone.
As of knex v0.21.10+, a new method, onConflict, was introduced.
The official documentation says:
Implemented for the PostgreSQL, MySQL, and SQLite databases. A modifier for insert queries that specifies alternative behaviour in the case of a conflict. A conflict occurs when a table has a PRIMARY KEY or a UNIQUE index on a column (or a composite index on a set of columns) and a row being inserted has the same value as a row which already exists in the table in those column(s). The default behaviour in case of conflict is to raise an error and abort the query. Using this method you can change this behaviour to either silently ignore the error by using .onConflict().ignore() or to update the existing row with new data (perform an "UPSERT") by using .onConflict().merge().
So in your case, the implementation would be:
knex('table')
  .insert({
    id: id,
    name: name
  })
  .onConflict('id')
  .merge()
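For completeness, if you instead want to keep the existing row and silently skip the conflicting insert, the same chain with ignore() would look roughly like this:

// Leaves the existing row untouched when a row with the same id already exists
knex('table')
  .insert({
    id: id,
    name: name
  })
  .onConflict('id')
  .ignore()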
I've created a function for doing this and described it on the knex github issues page (along with some of the gotchas for dealing with composite unique indices).
const upsert = (params) => {
  const { table, object, constraint } = params;
  const insert = knex(table).insert(object);
  const update = knex.queryBuilder().update(object);
  return knex.raw(`? ON CONFLICT ${constraint} DO ? returning *`, [insert, update]).get('rows').get(0);
};
Example usage:
const objToUpsert = { a: 1, b: 2, c: 3 };

upsert({
  table: 'test',
  object: objToUpsert,
  constraint: '(a, b)',
})
A note about composite nullable indices
If you have a composite index (a,b) and b is nullable, then values (1, NULL) and (1, NULL) are considered mutually unique by Postgres (I don't get it either).
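To illustrate that gotcha, here is a small sketch using knex's schema builder (the test table and its columns are just hypothetical):

// UNIQUE index on (a, b): Postgres treats NULLs as distinct values,
// so these two inserts both succeed instead of triggering ON CONFLICT.
await knex.schema.createTable('test', (table) => {
  table.integer('a');
  table.integer('b');
  table.unique(['a', 'b']);
});

await knex('test').insert({ a: 1, b: null }); // inserted
await knex('test').insert({ a: 1, b: null }); // also inserted, no conflict is raised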
Yet another approach I could think of!
exports.upsert = (t, tableName, columnsToRetain, conflictOn) => {
  const insert = knex(tableName)
    .insert(t)
    .toString();
  const update = knex(tableName)
    .update(t)
    .toString();
  const keepValues = columnsToRetain.map((c) => `"${c}"=${tableName}."${c}"`).join(',');
  const conflictColumns = conflictOn.map((c) => `"${c.toString()}"`).join(',');

  let insertOrUpdateQuery = `${insert} ON CONFLICT (${conflictColumns}) DO ${update}`;
  insertOrUpdateQuery = keepValues ? `${insertOrUpdateQuery}, ${keepValues}` : insertOrUpdateQuery;
  insertOrUpdateQuery = insertOrUpdateQuery.replace(`update "${tableName}"`, 'update');
  insertOrUpdateQuery = insertOrUpdateQuery.replace(`"${tableName}"`, tableName);
  return Promise.resolve(knex.raw(insertOrUpdateQuery));
};
Very simple.
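For illustration, a hypothetical call (assuming a users table with a unique email column and an updated_at column whose existing value should be kept) might look like this:

// assuming the function above has been imported as `upsert`
await upsert(
  { email: 'a@example.com', name: 'Alice' }, // t: the row to insert or update
  'users',                                   // tableName
  ['updated_at'],                            // columnsToRetain: keep the existing values of these columns
  ['email']                                  // conflictOn: the unique-constraint columns
);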
Adding onto Dorad's answer: you can choose specific columns to upsert by passing them to merge.
knex('table')
  .insert({
    id: id,
    name: name
  })
  .onConflict('id')
  .merge(['name']); // put the column names you want to merge inside an array
I used to name the parameters in my SQL queries when preparing them, for practical reasons, like in PHP with PDO.
So, can I use named parameters with the node-postgres module?
For now, I have seen many examples and docs on the internet showing queries like this:
client.query("SELECT * FROM foo WHERE id = $1 AND color = $2", [22, 'blue']);
But is this also correct?
client.query("SELECT * FROM foo WHERE id = :id AND color = :color", {id: 22, color: 'blue'});
Or this?
client.query("SELECT * FROM foo WHERE id = ? AND color = ?", [22, 'blue']);
I'm asking this because numbered parameters ($n) don't help me in the case of dynamically built queries.
There is a library for what you are trying to do. Here's how:
var sql = require('yesql').pg
client.query(sql("SELECT * FROM foo WHERE id = :id AND color = :color")({id: 22, color: 'blue'}));
queryConvert to the rescue. It takes a parameterized SQL string and a params object and converts them into a pg-conforming query config.
type QueryReducerArray = [string, any[], number];

export function queryConvert(parameterizedSql: string, params: Record<string, any>) {
  const [text, values] = Object.entries(params).reduce(
    ([sql, array, index], [key, value]) =>
      [sql.replace(`:${key}`, `$${index}`), [...array, value], index + 1] as QueryReducerArray,
    [parameterizedSql, [], 1] as QueryReducerArray
  );
  return { text, values };
}
Usage would be as follows:
client.query(queryConvert("SELECT * FROM foo WHERE id = :id AND color = :color", {id: 22, color: 'blue'}));
Not exactly what the OP is asking for, but you could also use:
import SQL from 'sql-template-strings';
client.query(SQL`SELECT * FROM unicorn WHERE color = ${colorName}`)
It uses tag functions in combination with template literals to embed the values.
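For intuition, a stripped-down sketch of such a tag function (not the actual sql-template-strings implementation) could look roughly like this:

// Minimal sketch: keep the interpolated values aside and replace each one with $1, $2, ...
function SQL(strings, ...values) {
  const text = strings.reduce((prev, curr, i) => prev + '$' + i + curr);
  return { text, values }; // the { text, values } shape that client.query() accepts
}

// SQL`SELECT * FROM unicorn WHERE color = ${'blue'}`
// -> { text: "SELECT * FROM unicorn WHERE color = $1", values: ['blue'] }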
I have been working with Node.js and Postgres. I usually execute queries like this:
client.query("DELETE FROM vehiculo WHERE vehiculo_id= $1", [id], function (err, result){ //Delete a record in de db
if(err){
client.end();//Close de data base conection
//Error code here
}
else{
client.end();
//Some code here
}
});