Postgresql - INSERT INTO based on multiple SELECT - sql

I am trying to write an INSERT INTO statement in PostgreSQL based on several SELECTs, but haven't succeeded.
I have one table containing data I select (srctab), and another one where I insert data (dsttab).
Here is what I run:
INSERT INTO dsttab (dstfld1, dstfld2) WITH
t1 AS (
SELECT srcfld1
FROM srctab
WHERE srcfld3 ='foo'
),
t2 AS (
SELECT srcfld5
FROM srctab
WHERE srcfld6 ='bar'
) select srcfld1, srcfld5 from srctab;
Could you please help me make this work? Thank you!

Note: I'm guessing about what you want to do here. My guess is that you want to insert a single row with the values from the CTEs (that's the WITH block). As written, your query would insert a row into dsttab for every row in srctab, if it were valid syntax.
You don't really need a CTE here. CTEs should really only be used when you need to reference the same subquery more than once; that's what they exist for. (Occasionally, you can somewhat abuse them to control certain performance aspects in PostgreSQL, but that isn't the case in other DBs and is something to be avoided when possible anyway.)
Just put your queries in line:
INSERT INTO dsttab (dstfld1, dstfld2)
VALUES (
(SELECT srcfld1
FROM srctab
WHERE srcfld3 ='foo'),
(SELECT srcfld5
FROM srctab
WHERE srcfld6 ='bar')
);
The key point here is to surround the subqueries with parentheses.
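If you do want to keep the CTEs, here is a minimal sketch, assuming each subquery returns exactly one row (table and column names follow the question); the WITH clause is attached to the SELECT that feeds the INSERT, and the two single-row CTEs are cross joined:
INSERT INTO dsttab (dstfld1, dstfld2)
WITH t1 AS (
    SELECT srcfld1 FROM srctab WHERE srcfld3 = 'foo'
),
t2 AS (
    SELECT srcfld5 FROM srctab WHERE srcfld6 = 'bar'
)
SELECT t1.srcfld1, t2.srcfld5  -- one row per combination of t1 and t2 rows
FROM t1 CROSS JOIN t2;
If either CTE returns more than one row, the cross join inserts one row per combination, so this only matches the VALUES version above when both subqueries are single-row.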

Related

Is there any SQL query character limit while executing it by using the JDBC driver [duplicate]

I'm using the following code:
SELECT * FROM table
WHERE Col IN (123,123,222,....)
However, if I put more than ~3000 numbers in the IN clause, SQL throws an error.
Does anyone know if there's a size limit or anything similar?
Depending on the database engine you are using, there can be limits on the length of a statement.
SQL Server has a very large limit:
http://msdn.microsoft.com/en-us/library/ms143432.aspx
Oracle, on the other hand, has a limit that is very easy to reach.
So, for large IN clauses, it's better to create a temporary table, insert the values, and do a JOIN. It usually works faster as well.
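A rough sketch of that approach in SQL Server syntax (the table name MyTable is illustrative; Col follows the question):
-- Stage the values in a temp table, then join to it instead of using a long IN list.
CREATE TABLE #ids (Id INT NOT NULL PRIMARY KEY);
INSERT INTO #ids (Id) VALUES (123), (222), (456);   -- batch-insert all of your values here
SELECT t.*
FROM MyTable t
JOIN #ids i ON i.Id = t.Col;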
There is a limit, but you can split your values into separate blocks of IN ():
SELECT *
FROM table
WHERE Col IN (123,123,222,....)
   OR Col IN (456,878,888,....)
Parameterize the query and pass the ids in using a Table Valued Parameter.
For example, define the following type:
CREATE TYPE IdTable AS TABLE (Id INT NOT NULL PRIMARY KEY)
Along with the following stored procedure:
CREATE PROCEDURE sp__Procedure_Name
    @OrderIDs IdTable READONLY
AS
SELECT *
FROM table
WHERE Col IN (SELECT Id FROM @OrderIDs)
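Once the type and procedure exist, a sketch of calling it from T-SQL (the variable name @ids is illustrative; from application code you would pass the TVP through your data access library instead):
DECLARE @ids IdTable;                               -- variable of the table type
INSERT INTO @ids (Id) VALUES (123), (222), (456);   -- fill it with the ids
EXEC sp__Procedure_Name @OrderIDs = @ids;           -- pass it as the TVP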
Why not do a WHERE ... IN against a sub-select?
Pre-query into a temp table or something similar...
CREATE TABLE SomeTempTable AS
SELECT YourColumn
FROM SomeTable
WHERE UserPickedMultipleRecordsFromSomeListOrSomething
then...
SELECT * FROM OtherTable
WHERE YourColumn IN ( SELECT YourColumn FROM SomeTempTable )
Depending on your version, use a table valued parameter in 2008, or some approach described here:
Arrays and Lists in SQL Server 2005
For MS SQL 2016, passing ints into the IN clause, it looks like it can handle close to 38,000 values.
select * from user where userId in (1,2,3,etc)
I solved this by simply using ranges:
WHERE Col >= 123 AND Col <= 10000
then removed unwanted records in the specified range by looping in the application code. It worked well for me because I was looping over the records anyway, and ignoring a couple of thousand of them didn't make any difference.
Of course, this is not a universal solution, but it can work when most of the values between the min and max are required.
You did not specify the database engine in question; in Oracle, an option is to use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
This ugly hack only works in Oracle SQL, see https://asktom.oracle.com/pls/asktom/asktom.search?tag=limit-and-conversion-very-long-in-list-where-x-in#9538075800346844400
However, a much better option is to use stored procedures and pass the values as an array.
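A minimal sketch of the array idea in Oracle (the collection type name num_list is illustrative, and in practice the collection would be bound from application code rather than written inline):
CREATE TYPE num_list AS TABLE OF NUMBER;
-- Treat the collection as a row source and run the IN against it.
SELECT t.*
FROM mytable t
WHERE t.Col IN (SELECT COLUMN_VALUE FROM TABLE(num_list(123, 222, 456)));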
You can use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
There are no restrictions on the number of these, because the limit applies to lists of single values rather than to lists of tuples; the comparison is done on pairs.


Copy data from one table to another without duplicates based on more then one column comparision

I am stuck on the following query. It was working properly on MySQL, but it gives an error on MSSQL 2005. The main purpose of the query is to copy data from one table to another without duplicates, based on a comparison of multiple columns from both tables.
I can do this when comparing one column for duplicates, but I can't when comparing more than one column.
Here is my query.
INSERT INTO eBayStockTaking (OrderLineItemID,Qty,SKU,SubscriberID,eBayUserID)
SELECT OrderLineItemID,Qty,SKU,SubscriberID,eBayUserID
FROM tempEBayStockTaking WHERE (OrderLineItemID,SubscriberID,eBayUserID)
Not In (SELECT OrderLineItemID,SubscriberID,eBayUserID FROM eBayStockTaking)
Note: I have been through many similar questions but all in vain.
Thanks
Rather try NOT EXISTS
Something like
INSERT INTO eBayStockTaking (OrderLineItemID, Qty, SKU, SubscriberID, eBayUserID)
SELECT OrderLineItemID,
       Qty,
       SKU,
       SubscriberID,
       eBayUserID
FROM tempEBayStockTaking t
WHERE NOT EXISTS (
    SELECT *
    FROM eBayStockTaking e
    WHERE e.OrderLineItemID = t.OrderLineItemID
      AND e.SubscriberID = t.SubscriberID
      AND e.eBayUserID = t.eBayUserID
)
I know MySQL allows row subqueries, but SQL Server does not.
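For what it's worth, an equivalent sketch using a LEFT JOIN anti-join (same semantics as the NOT EXISTS version, assuming OrderLineItemID, SubscriberID, and eBayUserID together identify a row):
INSERT INTO eBayStockTaking (OrderLineItemID, Qty, SKU, SubscriberID, eBayUserID)
SELECT t.OrderLineItemID, t.Qty, t.SKU, t.SubscriberID, t.eBayUserID
FROM tempEBayStockTaking t
LEFT JOIN eBayStockTaking e
       ON e.OrderLineItemID = t.OrderLineItemID
      AND e.SubscriberID = t.SubscriberID
      AND e.eBayUserID = t.eBayUserID
WHERE e.OrderLineItemID IS NULL   -- keep only rows with no match in the target table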

Oracle SQL: How to use more than 1000 items inside an IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
I have a SQL statement where I would like to fetch data for 1200 ep_codes by making use of an IN clause. When I include more than 1000 ep_codes inside the IN clause, Oracle says I'm not allowed to do that. To work around this, I tried to change the SQL as follows:
SELECT period, ...
FROM my_view
WHERE period = '200912'
...
AND ep_codes IN (...1000 ep_codes...)
OR ep_codes IN (...200 ep_codes...)
The code executed successfully, but the results are strange (calculation results are fetched for all periods, not just for 200912, which is not what I want). Is it appropriate to use OR between IN clauses like this, or should I execute two separate queries, one with 1000 and the other with 200 ep_codes?
Pascal Martin's solution worked perfectly. Thanks to all who contributed valuable suggestions.
The recommended way to handle this in Oracle is to create a global temporary table, write the values into it, and then join to it. Using dynamically created IN clauses also means the query optimizer does a 'hard parse' of every query.
create global temporary table LOOKUP
(
ID NUMBER
) on commit delete rows;
-- Do a batch insert from your application to populate this table
insert into lookup(id) values (?)
-- join to it
select foo from bar where code in (select id from lookup)
Not sure that using so many values in an IN () is that good, actually, especially for performance.
When you say "the results are strange", maybe it is because of a problem with parentheses: AND binds more tightly than OR, so in your query the period filter only applies to the first IN list. What if you try this instead of what you proposed:
SELECT ...
FROM ...
WHERE ...
AND (
ep_codes IN (...1000 ep_codes...)
OR ep_codes IN (...200 ep_codes...)
)
Does it make the results less strange?
Actually you can use collections/multisets here. You'll need a number table type to store them.
CREATE TYPE NUMBER_TABLE AS TABLE OF NUMBER;
...
SELECT *
FROM my_view
WHERE ep_codes MEMBER OF NUMBER_TABLE(1,2,3...10000)
Read more about multisets in the Oracle documentation.
Seems like it would be a better idea, both for performance and maintainability, to put the codes in a separate table.
SELECT ...
FROM ...
WHERE ...
AND ep_code in (select code from ep_code_table)
Could you insert the 1200 ep_code values into a temporary table and then INNER JOIN to that table to filter rows instead?
SELECT a.*
FROM mytable a
INNER JOIN tmp ON (tmp.ep_code = a.ep_code)
WHERE ...
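A sketch of the temporary table that join assumes (Oracle syntax; the name tmp and column ep_code follow the query above, and the VARCHAR2(20) datatype is an assumption):
CREATE GLOBAL TEMPORARY TABLE tmp (
    ep_code VARCHAR2(20)
) ON COMMIT PRESERVE ROWS;
-- Batch-insert the 1200 ep_codes from the application, then filter by joining.
INSERT INTO tmp (ep_code) VALUES ('...');
SELECT a.*
FROM mytable a
INNER JOIN tmp ON (tmp.ep_code = a.ep_code)
WHERE a.period = '200912';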
