I would like to do a batch upsert for a table in Postgres. Since Prisma doesn't support this in its API, I have to use $executeRaw. I am a little stuck, though, on how to properly use Prisma.join and Prisma.sql for inserting the data into the template tag.
This post seems to indicate a possible solution: https://github.com/prisma/prisma/discussions/15452#discussioncomment-3737632. However, the function signature of Prisma.sql is export declare function sqltag(strings: ReadonlyArray<string>, ...values: RawValue[]): Sql; which indicates it takes an array of string fragments plus a rest parameter of values, rather than a single string.
This is what I have so far:
async function batchUpsertPosts(posts) {
  await prisma.$executeRaw`INSERT INTO Posts (
    feedDomain,
    postId,
    title,
    score
  )
  VALUES (), (), () -- ... and so on with posts
  ON CONFLICT ("feedDomain","postId") DO UPDATE SET score = excluded.score`
}
Ok, I think I managed to figure it out.
async function batchUpsertPosts(posts) {
  await prisma.$executeRaw`INSERT INTO "public"."Posts" (
    "feedDomain",
    "postId",
    "title",
    "score"
  )
  VALUES ${Prisma.join(
    posts.map(
      post =>
        Prisma.sql`(${post.feedDomain}, ${post.postId}, ${post.title}, ${post.score})`
    )
  )}
  ON CONFLICT ("feedDomain","postId") DO UPDATE SET score = excluded.score`
}
So I just needed to wrap each row in a Prisma.sql`` fragment and combine the fragments with Prisma.join.
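Incidentally, the signature confusion from the question resolves itself once you look at how JavaScript tagged templates work: the runtime splits the literal into its string fragments and its interpolated values, which is exactly the (strings, ...values) shape of sqltag. Here is a minimal sketch in plain JavaScript; fakeSqltag is a hypothetical stand-in, not Prisma's actual implementation:

```javascript
// A stand-in for Prisma's sqltag, showing how a tagged template
// call maps onto its (strings, ...values) signature.
function fakeSqltag(strings, ...values) {
  // strings: the literal fragments around the interpolations
  // values: the interpolated expressions, in order
  return { strings: [...strings], values };
}

const postId = 42;
const title = "hello";
const result = fakeSqltag`(${postId}, ${title})`;

console.log(result.strings); // [ "(", ", ", ")" ]
console.log(result.values);  // [ 42, "hello" ]
```

Prisma's real Sql object keeps these two pieces separate so the values can be sent as bind parameters, which is why joining Prisma.sql fragments with Prisma.join stays safe against SQL injection.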
I have the following SuiteQL code, generated using the NetSuite: Workbook Export Chrome extension:
`
SELECT
BUILTIN_RESULT.TYPE_DATE(TRANSACTION.trandate) AS trandate,
BUILTIN_RESULT.TYPE_STRING(TRANSACTION.tranid) AS tranid,
BUILTIN_RESULT.TYPE_STRING(item.displayname) AS displayname,
BUILTIN_RESULT.TYPE_CURRENCY(BUILTIN.CONSOLIDATE(transactionLine.rate, 'LEDGER', 'DEFAULT', 'DEFAULT', 1, 100, 'DEFAULT'), BUILTIN.CURRENCY(BUILTIN.CONSOLIDATE(transactionLine.rate, 'LEDGER', 'DEFAULT', 'DEFAULT', 1, 100, 'DEFAULT'))) AS rate,
BUILTIN_RESULT.TYPE_FLOAT(transactionLine.quantity) AS quantity
FROM
TRANSACTION,
item,
transactionLine,
(SELECT
PreviousTransactionLink.nextdoc AS nextdoc,
PreviousTransactionLink.nextdoc AS nextdoc_join,
transaction_SUB.name_crit AS name_crit_0
FROM
PreviousTransactionLink,
(SELECT
transaction_0.ID AS ID,
transaction_0.ID AS id_join,
CUSTOMLIST234.name AS name_crit
FROM
TRANSACTION transaction_0,
CUSTOMLIST234
WHERE
transaction_0.custbody1 = CUSTOMLIST234.ID(+)
) transaction_SUB
WHERE
PreviousTransactionLink.previousdoc = transaction_SUB.ID(+)
) PreviousTransactionLink_SUB
WHERE
(((transactionLine.item = item.ID(+) AND TRANSACTION.ID = transactionLine.TRANSACTION) AND TRANSACTION.ID = PreviousTransactionLink_SUB.nextdoc(+)))
AND ((TRANSACTION.TYPE IN ('CustInvc') AND transactionLine.itemtype IN ('InvtPart', 'Kit') AND NVL(transactionLine.mainline, 'F') = ? AND (UPPER(PreviousTransactionLink_SUB.name_crit_0) NOT LIKE ? OR PreviousTransactionLink_SUB.name_crit_0 IS NULL) AND TRUNC(TRANSACTION.trandate) > TO_DATE(?, 'YYYY-MM-DD')))
`
When I try to paste it into Power Automate's API call, I get the following 400 error:
"Invalid search query. Detailed unprocessed description follows. Invalid number of parameters. Expected: 3. Provided: 0."
My call's query was formatted as follows:
`
{
"q": "SELECT BUILTIN_RESULT.TYPE_DATE(TRANSACTION.trandate) AS trandate, BUILTIN_RESULT.TYPE_STRING(TRANSACTION.tranid) AS tranid, BUILTIN_RESULT.TYPE_STRING(item.displayname) AS displayname, BUILTIN_RESULT.TYPE_CURRENCY(BUILTIN.CONSOLIDATE(transactionLine.rate, 'LEDGER', 'DEFAULT', 'DEFAULT', 1, 100, 'DEFAULT'), BUILTIN.CURRENCY(BUILTIN.CONSOLIDATE(transactionLine.rate, 'LEDGER', 'DEFAULT', 'DEFAULT', 1, 100, 'DEFAULT'))) AS rate, BUILTIN_RESULT.TYPE_FLOAT(transactionLine.quantity) AS quantity FROM TRANSACTION, item, transactionLine, (SELECT PreviousTransactionLink.nextdoc AS nextdoc, PreviousTransactionLink.nextdoc AS nextdoc_join, transaction_SUB.name_crit AS name_crit_0 FROM PreviousTransactionLink, (SELECT transaction_0.ID AS ID,transaction_0.ID AS id_join, CUSTOMLIST234.name AS name_crit FROM TRANSACTION transaction_0, CUSTOMLIST234 WHERE transaction_0.custbody1 = CUSTOMLIST234.ID(+) ) transaction_SUB WHERE PreviousTransactionLink.previousdoc = transaction_SUB.ID(+) ) PreviousTransactionLink_SUB WHERE (((transactionLine.item = item.ID(+) AND TRANSACTION.ID = transactionLine.TRANSACTION) AND TRANSACTION.ID = PreviousTransactionLink_SUB.nextdoc(+))) AND ((TRANSACTION.TYPE IN ('CustInvc') AND transactionLine.itemtype IN ('InvtPart', 'Kit') AND NVL(transactionLine.mainline, 'F') = ? AND (UPPER(PreviousTransactionLink_SUB.name_crit_0) NOT LIKE ? OR PreviousTransactionLink_SUB.name_crit_0 IS NULL) AND TRUNC(TRANSACTION.trandate) > TO_DATE(?, 'YYYY-MM-DD')))"
}
`
When I try using SuiteQL in my API calls with a simple query, my API calls work, so I'm pretty sure I'm screwing up the format of the JSON above. I tried the following simple query call and it was successful:
`
{
"q": "SELECT email, COUNT(*) as count FROM transaction GROUP BY email"
}
`
I have tried using a JSON beautifier to try to fix my JSON, but I haven't been able to do so successfully.
Below is a pic of the HTTP action I'm using in Power Automate to make the query:
HTTP POST Action
For context, I'm an accountant by trade trying to learn how to do some basic coding. Any hint that will help me correctly format the above query will be greatly appreciated. Thanks!
As per my comment, question marks in an SQL statement are typically treated as parameters.
Code-based frameworks use them as placeholders so the parameter values can be kept separate from the SQL string itself.
Now, in PowerAutomate and with your question, I think it's just a little bit of seeing the forest for the trees.
The easiest way to populate the question marks is to literally replace them in the string itself with the relevant variable or expression.
So if I took an SQL statement with a parameter like you have ...
SELECT * FROM [dbo].[Test] WHERE Parameter = ?
... I can do the following (this is an EXTREMELY basic example) ...
Using the expressions, you can populate strings within other variables or steps in the flow.
Just be careful when it comes to wrapping your different parameters in quotes, either single or double. The SQL statement will need them where the values are strings or dates, but not where they are numbers.
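To make the substitution concrete outside of Power Automate, here is a hedged JavaScript sketch of the idea; the query and the values ('F' and 100) are made-up examples, and a real flow would do this with string expressions in flow steps rather than code:

```javascript
// Replace each ? placeholder, in order, with the matching value.
// Strings are wrapped in single quotes; numbers are inlined as-is.
function fillPlaceholders(sql, params) {
  let i = 0;
  return sql.replace(/\?/g, () => {
    const v = params[i++];
    return typeof v === "number" ? String(v) : `'${v}'`;
  });
}

const query = "SELECT * FROM [dbo].[Test] WHERE Parameter = ? AND Amount > ?";
console.log(fillPlaceholders(query, ["F", 100]));
// SELECT * FROM [dbo].[Test] WHERE Parameter = 'F' AND Amount > 100
```

Note that inlining values like this forfeits the injection protection real bind parameters give you, which is acceptable here only because the values come from your own flow, not from user input.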
I have a table that looks like this:
What I want to do is convert it to a JSON that has all distinct years as keys, and an array of Make as value for each key. So based on the image, it would be something like this:
{ "1906": ["Reo", "Studebaker"], "1907": ["Auburn", "Cadillac", "Duryea", "Ford"], ... }
Could someone please help me on how to write a query to achieve this? I've never worked with converting SQL -> JSON before, and all I know is to use FOR JSON to convert a query to JSON.
I managed to solve this for whoever needs something similar. However, I think this way is inefficient and I would love for someone to tell me a better method.
I first ran this SQL query:
WITH CTE_Make AS
(
    SELECT [Year], [Make]
    FROM VehicleInformation
    GROUP BY [Year], [Make]
)
SELECT
    (SELECT
         [Year] AS Year,
         STRING_AGG([Make], ',') AS MakeList
     FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
FROM CTE_Make
GROUP BY [Year]
This gave me the JSON in a different format from what I wanted. I stored this in a file "VehicleInformationInit.json", and loaded it in JS as variable vehicleInfoInitJsonFile and ran the following code in JavaScript:
var dict = {};
$.getJSON(vehicleInfoInitJsonFile, function (result) {
  result.forEach(x => {
    dict[x.Year] = x.MakeList.split(',');
  });
  var newJSON = JSON.stringify(dict);
});
I simply copied the contents of newJSON by attaching a debugger into my required file. However, you can easily use fs and store it in a file if you want to.
I have following snippet of code:
UserDataModel
.find {
UserDataTable.type eq type and (
UserDataTable.userId eq userId
)
}
.limit(count)
.sortedByDescending { it.timestamp }
sortedByDescending is part of the Kotlin collections API. My main concern is: how does the Exposed library return the top count rows (by timestamp) from the table if the generated select query looks like this and does not contain an ORDER BY clause?
SELECT USERDATA.ID, USERDATA.USER_ID, USERDATA.TYPE,
USERDATA.PAYLOAD, USERDATA."TIMESTAMP"
FROM USERDATA
WHERE USERDATA.TYPE = 'testType'
and USERDATA.USER_ID = 'mockUser'
LIMIT 4
And is it possible that the returned result would sometimes differ for the same data?
I've really struggled here. Thank you in advance.
You're sorting the results after the query has been executed, so the database only ever sees the LIMIT. And yes, without an ORDER BY clause SQL makes no guarantee about row order, so the same query can return different rows for the same data.
You need to use the orderBy method as described in the docs:
UserDataModel
.find {
UserDataTable.type eq type and (UserDataTable.userId eq userId)
}
.limit(count)
.orderBy(UserDataTable.timestamp to SortOrder.DESC)
I want to retrieve all rows matching multiple partial phrases against a column. My situation can be expressed as the following raw SQL:
SELECT *
FROM TABLENAME
WHERE COLUMN1 LIKE '%abc%'
OR COLUMN1 LIKE '%bcd%'
OR COLUMN1 LIKE '%def%';
Here, abc, bcd, and def are array elements, i.e. array('abc','bcd','def'). Is there any way to write code that passes this array to build the above raw SQL using CakePHP 3?
N.B.: I am using MySQL as the DBMS.
You could probably use a Collection to create a proper array, but I think a foreach loop will do the job in the same amount of code. So here is my solution, supposing $query stores your Query object:
$a = ['abc', 'bcd', 'def'];
$or = [];
foreach ($a as $value) {
    // Each closure builds one LIKE condition for the OR list.
    $or[] = function ($exp, $q) use ($value) {
        return $exp->like('column1', '%' . $value . '%');
    };
}
$query->where(['or' => $or]);
You could also use orWhere(), but I see it will be deprecated in 3.6.
I'm building a Node.js app that needs to query a Redshift database (based on postgres 8.0.2) using CTEs. Unfortunately, the SQL query builders I've looked at thus far (node-sql, knex.js and sequelize) don't seem to support common table expressions (CTEs).
I had great success forming common table expressions in Ruby using Jeremy Evans' Sequel gem, which has a with method that takes two arguments to define the tables' aliased name and the dataset reference. I'd like something similar in Node.
Have I missed any obvious contenders for Node.js SQL query builders? From what I can tell these are the four most obvious:
node-sql
nodesql (no postgres support?)
knex.js
sequelize
knex.js now supports WITH clauses:
knex.with('with_alias', (qb) => {
  qb.select('*').from('books').where('author', 'Test')
}).select('*').from('with_alias')
Outputs:
with "with_alias" as (select * from "books" where "author" = 'Test') select * from "with_alias"
I was able to use common table expressions (CTEs) with knex.js and it was pretty easy.
Assuming you're using socket.io along with knex.js,
knex-example.js:
function knexExample (io, knex) {
  io.on('connection', function (socket) {
    // Sub-queries that become the CTE bodies.
    var this_cte = knex('this_table').select('this_column');
    var that_cte = knex('that_table').select('that_column');

    // Knex query builders are coerced to their SQL strings when
    // concatenated, so the CTEs can be spliced into a raw query.
    // Note the space after the final closing parenthesis.
    knex.raw(
      'with t1 as (' + this_cte + '), t2 as (' + that_cte + ') ' +
      knex.select(['this', 'that']).from(['t1', 't2'])
    )
    .then(function (rows) {
      socket.emit('this_that:update', rows);
    });
  });
}

module.exports = knexExample;
From this issue and this issue I understand that you can use CTEs with Sequelize.
You need to use raw queries, and possibly specify the query type to be sure that Sequelize understands it's a SELECT query. See the first link.
Sample code would be:
sequelize.query(
  query, // raw SQL
  tableName,
  { raw: true, type: Sequelize.QueryTypes.SELECT }
).success(function (rows) {
  // ...
});
See here for more details.
xql.js supports the WITH clause as of version 1.4.12. A small example:
const xql = require("xql");
const SELECT = xql.SELECT;
return SELECT()
  .WITH("with_alias", SELECT().FROM("books").WHERE("author", "=", "Test"))
  .FROM("with_alias");
The WITH clause is supported for SELECT, INSERT, UPDATE, DELETE, and compound queries as well.