Is this safe for node-postgres?

I'm trying to insert a record into a table called email. I don't have all of the values on hand when performing the INSERT; I'd like to retrieve some of them from a table called provider. I managed to get it to work, but since I'm fairly new to this I wanted to ask: is this safe from SQL injection?
const newEmailRecord = {
  text: 'INSERT INTO email (col1, col2, col3, col4, col5, col6, col7) (SELECT $1, $2, prov_first_name AS col3, prov_email AS col4, $3, $4, $5 FROM provider WHERE prov_id = $6)',
  values: [var1, var2, var3, var4, var5, var6ProvId],
};
await client.query(newEmailRecord);

All the values that could come from outside the application are bound to placeholders, so this should be safe from SQL injection.
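For intuition on why bound placeholders are safe: a parameterized query is handled like a server-side prepare/bind/execute, so the SQL text and the values travel separately and the values are never parsed as SQL. A rough sketch of that equivalence in plain PostgreSQL (the parameter types below are assumptions; match them to your real column types):
-- Rough prepared-statement equivalent of the parameterized query above.
-- Parameter types are assumed for illustration; adjust to your schema.
PREPARE insert_email (text, text, text, text, text, int) AS
  INSERT INTO email (col1, col2, col3, col4, col5, col6, col7)
  SELECT $1, $2, prov_first_name, prov_email, $3, $4, $5
  FROM provider
  WHERE prov_id = $6;

-- Values are supplied only at execution time; they are never spliced into the SQL text.
EXECUTE insert_email('a', 'b', 'c', 'd', 'e', 42);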

Related

Import CSV file and append unique data

I have a table ("Posts") that tracks social media data. I want to import a .csv file that contains the new data.
The .csv file, from an outside service, maintains a rolling three months of data.
I'd like to (1) open the .csv file, (2) identify each line, based on date, that doesn't exist in my "Posts" table, and then (3) import the data for the new line without changing data already in the table.
I've been digging through the forums but am not finding what I need.
For the import step, I'm trying:
DoCmd.TransferText TransferType:=acImportDelim, TableName:="tblPosts", FileName:="C:\Users\[myname]\Desktop\Historical Reports\Posts.csv", HasFieldNames:=True
Unfortunately, my CSV field names differ somewhat from my table's column names. Do I need to skip the first line of the .csv and build a custom SQL INSERT statement instead?
Consider an SQL query (the MS Access database engine can query CSV files directly) that uses one of the classic duplicate-avoidance patterns: LEFT JOIN / IS NULL, NOT IN, or NOT EXISTS. Adjust the column names below, including Date, to match your data:
INSERT INTO Posts (col1, col2, col3)
SELECT t.col1, t.col2, t.col3
FROM [text;database=C:\Users\[myname]\Desktop\Historical Reports].Posts.csv AS t
LEFT JOIN Posts p
  ON t.Date = p.Date
WHERE p.Date IS NULL;

INSERT INTO Posts (col1, col2, col3)
SELECT t.col1, t.col2, t.col3
FROM [text;database=C:\Users\[myname]\Desktop\Historical Reports].Posts.csv AS t
WHERE t.Date NOT IN
      (SELECT p.Date FROM Posts p);

INSERT INTO Posts (col1, col2, col3)
SELECT t.col1, t.col2, t.col3
FROM [text;database=C:\Users\[myname]\Desktop\Historical Reports].Posts.csv AS t
WHERE NOT EXISTS
      (SELECT 1 FROM Posts p
       WHERE p.Date = t.Date);
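On the mismatched names from the question: with this query approach the CSV field names never have to match the table's column names, because INSERT ... SELECT maps columns by position. A sketch with made-up names (a CSV header "Post Date" feeding a table column PostDate, and "Like Count" feeding Likes):
-- Column names here are hypothetical, for illustration only.
-- The selected CSV fields map to the target columns by position.
INSERT INTO Posts (PostDate, Likes)
SELECT t.[Post Date], t.[Like Count]
FROM [text;database=C:\Users\[myname]\Desktop\Historical Reports].Posts.csv AS t
LEFT JOIN Posts p
  ON t.[Post Date] = p.PostDate
WHERE p.PostDate IS NULL;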

Query with IN operator not performing well in SQL Server

I have a query in SQL Server that looks something like this:
select sum(col1)
from TableA
where col2 = ?
  and col3 = ?
  and col4 in (?, ?, ?... ?)
TableA has a composite index on (col2, col3, col4).
This query's performance degrades as the number of items in the IN list grows.
Is there a good way to rewrite this query for better performance?
List can grow from 1 to 300 items.
Your index should be pretty good. For this query:
select sum(col1)
from TableA
where col2 = ? and
col3 = ? and
col4 in (?, ?, ?... ?);
The optimal index is (col2, col3, col4, col1); the last column can be an INCLUDE column if you prefer.
For the index to be used, make sure the types are compatible for the comparisons. Implicit conversions -- and changes in collation -- can prevent the index from being used.
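If the plan still degrades as the IN list approaches 300 items, one common alternative is to stage the values in a temporary table (or a table-valued parameter) and join to it. A sketch, where the table/column names and the int type of col4 are assumptions:
-- Stage the IN-list values once, then join; names and types are illustrative.
CREATE TABLE #col4_values (col4 int PRIMARY KEY);

INSERT INTO #col4_values (col4)
VALUES (?), (?), (?);   -- one row per list item

SELECT SUM(a.col1)
FROM TableA a
JOIN #col4_values v ON v.col4 = a.col4
WHERE a.col2 = ?
  AND a.col3 = ?;
This also keeps the query text identical no matter how many items you pass, which helps plan caching.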

SQL IN by multiple columns in Slick

I want to express something like
WHERE (col1, col2)
IN ((val1a, val2a), (val1b, val2b), ...)
In pseudo-Slick:
val values = Set(("val1a", "val2a"), ("val1b", "val2b"))
// and in filter
(col1, col2) inSetBind values
Is there a way to do that in Slick?
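For reference, even on databases without row-value (tuple) IN support, the SQL being targeted can be written as a join against an inline list of pairs. A plain-SQL sketch (the table name and values are placeholders from the question, and the inline VALUES constructor assumes a database such as PostgreSQL or SQL Server):
-- Same filter as WHERE (col1, col2) IN ((val1a, val2a), (val1b, val2b)),
-- expressed as a join against an inline list of pairs.
SELECT t.*
FROM some_table t
JOIN (VALUES ('val1a', 'val2a'),
             ('val1b', 'val2b')) AS v (col1, col2)
  ON t.col1 = v.col1
 AND t.col2 = v.col2;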

Oracle: How to insert data from a slow SELECT query into two tables

I have searched but can't find an answer for this (maybe I'm using the wrong keywords...).
I ran into this problem today when I needed to create a procedure that calculates data and saves it to two report tables in two different schemas. Let's say those two tables have the same structure.
The query that calculates the data may take more than 60 seconds, and the underlying data may change between runs, so running the SELECT again may or may not return the same result.
I see two ways to insert the data into those two tables:
1. Run the INSERT twice with the same SELECT query.
2. Use a GTT (global temporary table) to hold the calculated data from the SELECT, then INSERT into the two tables from that GTT.
I wonder whether Oracle keeps a cache of the SELECT's result, so that the first way would perform better than the second (at the cost of longer, duplicated code that could drift out of sync).
Could anyone confirm and explain the right way to solve this, or suggest a better one?
Thank you,
Appendix 1:
INSERT INTO report_table (col1, col2, ....)
SELECT .....
FROM .....
--(long query)
;
INSERT INTO center_schema.report_table (col1, col2, ....)
SELECT .....
FROM .....
--same select query as above
;
Appendix 2:
INSERT INTO temp_report_table(col1, col2, ...)
SELECT .....
FROM .....
--(long query)
;
INSERT INTO report_table (col1, col2, ....)
SELECT col1, col2, ....
FROM temp_report_table
;
INSERT INTO center_schema.report_table (col1, col2, ....)
SELECT col1, col2, ....
FROM temp_report_table
;
No, you have a third option: the wonderful multi-table insert (INSERT ALL), which reads the source rows once and loads both tables:
INSERT ALL
  INTO report_table (col1, col2, ...)
  VALUES (col1, col2, ...)
  INTO center_schema.report_table (col1, col2, ...)
  VALUES (col1, col2, ...)
SELECT col1, col2, ...
FROM your_table
--(long query)
;
Note that the VALUES clauses refer to the column names (or aliases) returned by the SELECT, not to a table alias inside it, and that the long SELECT runs only once, so there is no duplicated query and no need for the GTT. For the details of this way of loading multiple tables at once, see the multi-table insert section of the Oracle documentation.

MySQL: format results as columns, not rows

Using MySQL Server, I have a table structure like:
(Table test)
id Name
1 "test1"
2 "test2"
3 "test3"
When I perform the following query:
Select name from test
the results are:
"test1"
"test2"
"test3"
How can I adjust this query so the results are columns, as such:
"test1" "test2" "test3"
Generally speaking, it's better to format your output in the application layer. Imagine that your table has 728003 unique values in the 'name' column. Your result would be unusably wide.
However, if you really want to do this and you have some unifying column to group on, you could do:
SELECT unifying_column, GROUP_CONCAT(DISTINCT name) FROM test
GROUP BY unifying_column;
PS: GROUP_CONCAT output is truncated once it exceeds group_concat_max_len characters (1024 by default), so very long lists get cut off.
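If you do hit that limit, it can be raised for the current session, for example:
-- Raise the GROUP_CONCAT truncation limit for this session;
-- the value below is arbitrary, pick one large enough for your data.
SET SESSION group_concat_max_len = 1000000;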
Addendum
The only way I could think to do it with multiple columns would be something like this (and it is a horrible kludge):
SELECT SUBSTRING_INDEX(nlist, ',', 1) AS col1,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 2), ',', -1) AS col2,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 3), ',', -1) AS col3,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 4), ',', -1) AS col4,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 5), ',', -1) AS col5,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 6), ',', -1) AS col6,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 7), ',', -1) AS col7,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 8), ',', -1) AS col8,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 9), ',', -1) AS col9,
       SUBSTRING_INDEX(SUBSTRING_INDEX(nlist, ',', 10), ',', -1) AS col10
FROM (SELECT unifying_column, GROUP_CONCAT(DISTINCT name ORDER BY name) AS nlist
      FROM test
      GROUP BY unifying_column) AS subtable;
(GROUP_CONCAT has no LIMIT clause, so the SUBSTRING_INDEX calls simply pick out the first ten names.) The above is untested; if a group has fewer than 10 names, the trailing columns will just repeat the last name in the list.
Why would you need the data to be returned like that from the database?
If you actually do need it in that format, it makes more sense to loop through the result rows in your application code and append them to a string there, rather than trying to do it directly in the database.