PostgreSQL multi INSERT...RETURNING with multiple columns

I'm building a database with Postgres 9.3 as a backend, having 3 tables:
table1 (user_id, username, name, surname, emp_date)
table2 (pass_id, user_id, password)
table3 (user_dt_id, user_id, adress, city, phone)
As can be seen, table2 and table3 are child tables of table1.
I can extract the user_id of a newly inserted row in table1 (parent):
INSERT INTO "table1" (default,'johnee','john','smith',default) RETURNING userid;
I need to insert the newly extracted id (from table1) into the user_id columns of table2 and table3, along with other data unique to those tables. Basically 3 x INSERT ...
How do I do that?

Use data-modifying CTEs to chain your three INSERTs. Something like this:
WITH ins1 AS (
   INSERT INTO table1 (username, name, surname)
   VALUES ('johnee', 'john', 'smith')
   RETURNING user_id
   )
, ins2 AS (
   INSERT INTO table2 (user_id, password)
   SELECT ins1.user_id, 'secret'
   FROM   ins1  -- nothing to return here
   )
INSERT INTO table3 (user_id, adress, city, phone)
SELECT ins1.user_id, ...
FROM   ins1
RETURNING user_id;
It's typically best to add a list of target columns to INSERT statements (except for special cases); else, if the table structure changes, your code might break in surprising ways.
I omitted columns where you would just enter DEFAULT. Defaults are inserted automatically. Shorter, same result.
The final, optional RETURNING returns the resulting user_id - obviously from a sequence or some other default. It's actually the user_id from table3, but that's the same unless you have some triggers or other magic interfering.
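If you also want the generated keys from the child tables, you can give each INSERT its own CTE and combine all RETURNING sets in a final SELECT. A sketch with placeholder values of my own ('secret' and the address, city, and phone values are assumptions):
WITH ins1 AS (
   INSERT INTO table1 (username, name, surname)
   VALUES ('johnee', 'john', 'smith')
   RETURNING user_id
   )
, ins2 AS (
   INSERT INTO table2 (user_id, password)
   SELECT user_id, 'secret'              -- placeholder password
   FROM   ins1
   RETURNING pass_id
   )
, ins3 AS (
   INSERT INTO table3 (user_id, adress, city, phone)
   SELECT user_id, 'main st 1', 'springfield', '555-0100'  -- placeholder values
   FROM   ins1
   RETURNING user_dt_id
   )
SELECT ins1.user_id, ins2.pass_id, ins3.user_dt_id
FROM   ins1, ins2, ins3;
Since every CTE returns exactly one row here, the cross join in the final SELECT produces a single row with all three generated keys.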
More about data-modifying (a.k.a. "writable") CTEs:
Are SELECT type queries the only type that can be nested?

Insert master-detail using function
CREATE OR REPLACE FUNCTION apps.usp_bazar_list_insert_1(
    ip_userid character varying,
    ip_mobileno character varying,
    ip_bazarlisttext text,
    data_details text)
  RETURNS TABLE(id integer)
  LANGUAGE plpgsql
  COST 100
  VOLATILE
  ROWS 1000
AS $BODY$
DECLARE
  master_id integer;
BEGIN
  -- insert the master row and capture its generated key
  INSERT INTO apps.bazar_list(action_by, mobile_no, bazarlisttext, action_date)
  SELECT ip_userid, ip_mobileno, ip_bazarlisttext, now()
  RETURNING list_id INTO master_id;

  -- insert one detail row per element of the JSON array
  INSERT INTO apps.bazar_list_images(list_id, action_date, image_name)
  SELECT master_id, now(), image_name
  FROM   json_populate_recordset(NULL::apps.bazar_list_images, data_details::json);

  RETURN QUERY SELECT 1;
END;
$BODY$;
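A call might then look like this (the argument values and the JSON payload are made up; the object keys must match columns of apps.bazar_list_images):
SELECT * FROM apps.usp_bazar_list_insert_1(
   '1001'                                               -- ip_userid
 , '017000000'                                          -- ip_mobileno
 , 'rice, lentils'                                      -- ip_bazarlisttext
 , '[{"image_name":"a.jpg"},{"image_name":"b.jpg"}]'    -- data_details
);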


Multiple Queries into 1 Query doing counts

OK, trying to make this make sense and see if I can do this. I have multiple queries stored in SQL Server 2012 that I can run individually to get actual results. Each of these queries connects to a multitude of tables. What I want to do is take all of these queries and put them into a single query to get counts in one master list.
So for example: I have a query that looks for all records that have no email address. The next query looks for all records missing a phone number in a set field. The next query looks for records missing a filled field.
Each of these queries pull back results and I can run them one at a time. I want to set myself up a single query I can run to give me counts on each in a single results list.
I started doing a UNION statement and put two of the queries into it. The results came up like this:
NoEmail NoPhone
NULL 24486
74596 NULL
What I would like this to look like is this:
NoEmail NoPhone
74596 24486
Any ideas on how to do this?
I hope this is enough info and if not, let me know and I'll get you more.
Thanks.
You can do it like this:
Set up (for demonstration purposes only, so you can see the data I'm using):
create table t1 (
id numeric,
email varchar(20)
);
insert into t1 values (1,'test@example.com');
insert into t1 (id) values (2);
insert into t1 (id) values (3);
create table t2 (
id numeric,
phone varchar(20)
);
insert into t2 values (1,'123-456-7890');
insert into t2 (id) values (2);
insert into t2 (id) values (3);
insert into t2 (id) values (4);
insert into t2 (id) values (5);
Query:
select
(select count(id) from t1 where email is null) as NoEmail,
(select count(id) from t2 where phone is null) as NoPhone;
Result:
NoEmail NoPhone
2 4
You can add as many additional queries onto the end as you'd like:
select
(select count(id) from t1 where email is null) as NoEmail,
(select count(id) from t2 where phone is null) as NoPhone,
(select ...) as anotherCol,
(select ...) as yetAnotherCol;
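Alternatively, if you want to salvage your original UNION attempt, you can collapse its two rows into one by aggregating away the NULLs. A sketch against the same demo tables (MAX simply ignores NULL values):
select max(NoEmail) as NoEmail, max(NoPhone) as NoPhone
from (
    select count(id) as NoEmail, null as NoPhone from t1 where email is null
    union all
    select null, count(id) from t2 where phone is null
) as counts;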

Return value cross join

I have two tables: table #1 contains user information (email, password, etc.), and table #2 contains item information.
When I do an INSERT into table #2 and then use the RETURNING clause to gather what was inserted (auto-generated values as well as other information), I also need to return information from table #1.
(excuse the syntax)
example:
insert into table #1(item,user) values('this item','the user')
returning *, select * from table 2 where table #1.user = table #2.user)
in other words, after the insert I need to return the values inserted, as well as the information about the user who inserted the data.
is this possible to do?
the only thing I came up with is using a whole bunch of subquery statements in the returning clause. there has to be a better way.
I suggest a data-modifying CTE (Postgres 9.1 or later):
WITH ins AS (
   INSERT INTO tbl1(item, usr)
   VALUES ('this item', 'the user')
   RETURNING usr
   )
SELECT t2.*
FROM   ins
JOIN   tbl2 t2 USING (usr);
Working with the column name usr instead of user, which is a reserved word.
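If you also need the inserted values themselves (the "returning *" part of the question), return them from the CTE and select from both sides of the join. A minimal sketch using the same tables:
WITH ins AS (
   INSERT INTO tbl1(item, usr)
   VALUES ('this item', 'the user')
   RETURNING item, usr
   )
SELECT ins.item, ins.usr, t2.*
FROM   ins
JOIN   tbl2 t2 USING (usr);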
Use a subquery.
Simple demo: http://sqlfiddle.com/#!15/bcc0d/3
insert into table2( userid, some_column )
values( 2, 'some data' )
returning
userid,
some_column,
( SELECT username FROM table1
WHERE table1.userid = table2.userid
);

Insert data in multiple tables at a time with repeated values

I have to insert data into the first and second tables directly. But the third table receives its data as an array, which I need to insert as multiple rows.
In my 3rd table values will be repeated. Ex:
values:
{name=ff,age=45,empid=23,desig=se,offid=1,details=kk,offid=2,details=aa,offid=3,details=bb,offid=4,details=cc}
So the userid from the 2nd table is the same for all the offid values, but details and the other columns are different.
My issue is that I get a single hit, but I need to iterate for the 3rd table.
with first_insert as (
insert into sample(name,age)
values(?,?)
RETURNING id
),
second_insert as (
insert into sample1(empid,desig)
values((select id from first_insert),?)
RETURNING userid
)
insert into sample2(offid,details)
values((select userid from second_insert),?)
Is this available or possible in PostgreSQL?
Yes, absolutely possible.
You can join rows from CTEs to VALUES expressions to combine them for a new INSERT in a data-modifying CTE. Something like this:
WITH first_insert AS (
   INSERT INTO sample(name, age)
   VALUES (?, ?)
   RETURNING id
   )
, second_insert AS (
   INSERT INTO sample1 (empid, desig, colx)
   SELECT i1.id, v.desig, v.colx
   FROM   first_insert i1
        , (VALUES (?, ?)) AS v(desig, colx)
   RETURNING userid
   )
INSERT INTO sample2 (offid, details, col2, ...)
SELECT i2.userid, v.details, ...
FROM   second_insert i2
     , (VALUES (?, ?, ...)) AS v(details, col2, ...);
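For the repeated values in your example (one row per offid/details pair), make the VALUES expression of the last step multi-row; the single userid from second_insert is then cross-joined to every pair. A sketch assuming sample2 has a userid column, using the sample data from the question:
WITH first_insert AS (
   INSERT INTO sample(name, age)
   VALUES ('ff', 45)
   RETURNING id
   )
, second_insert AS (
   INSERT INTO sample1 (empid, desig)
   SELECT i1.id, 'se'
   FROM   first_insert i1
   RETURNING userid
   )
INSERT INTO sample2 (userid, offid, details)
SELECT i2.userid, v.offid, v.details
FROM   second_insert i2
     , (VALUES (1, 'kk'), (2, 'aa'), (3, 'bb'), (4, 'cc')) AS v(offid, details);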

Return id if a row exists, INSERT otherwise

I'm writing a function in node.js to query a PostgreSQL table.
If the row exists, I want to return the id column from the row.
If it doesn't exist, I want to insert it and return the id (insert into ... returning id).
I've been trying variations of case and if else statements and can't seem to get it to work.
A solution in a single SQL statement. Requires PostgreSQL 9.1 or later though (for data-modifying CTEs).
Consider the following demo:
Test setup:
CREATE TEMP TABLE tbl (
id serial PRIMARY KEY
,txt text UNIQUE -- obviously there is a unique column (or set of columns)
);
INSERT INTO tbl(txt) VALUES ('one'), ('two');
INSERT / SELECT command:
WITH v AS (SELECT 'three'::text AS txt)
, s AS (SELECT id FROM tbl JOIN v USING (txt))
, i AS (
   INSERT INTO tbl (txt)
   SELECT txt
   FROM   v
   WHERE  NOT EXISTS (SELECT * FROM s)
   RETURNING id
   )
SELECT id, 'i'::text AS src FROM i
UNION ALL
SELECT id, 's' FROM s;
The first CTE v is not strictly necessary, but achieves that you have to enter your values only once.
The second CTE s selects the id from tbl if the "row" exists.
The third CTE i inserts the "row" into tbl if (and only if) it does not exist, returning id.
The final SELECT returns the id. I added a column src indicating the "source" - whether the "row" pre-existed and id comes from a SELECT, or the "row" was new and so is the id.
This version should be as fast as possible as it does not need an additional SELECT from tbl and uses the CTEs instead.
To make this safe against possible race conditions in a multi-user environment, and for updated techniques using the new UPSERT in Postgres 9.5 or later, see:
Is SELECT or INSERT in a function prone to race conditions?
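With 9.5 or later, the single-statement variant can use ON CONFLICT. A sketch against the demo table above; the no-op update is only there so that RETURNING also reports the id of a pre-existing row (at the cost of a dummy write):
INSERT INTO tbl (txt)
VALUES ('three')
ON     CONFLICT (txt) DO UPDATE
SET    txt = EXCLUDED.txt    -- no-op update so RETURNING sees existing rows
RETURNING id;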
I would suggest doing the checking on the database side and just returning the id to Node.js.
Example:
CREATE OR REPLACE FUNCTION foo(p_param1 tableFoo.attr1%TYPE, p_param2 tableFoo.attr2%TYPE) RETURNS tableFoo.id%TYPE AS $$
DECLARE
   v_id tableFoo.id%TYPE;
BEGIN
   SELECT id
     INTO v_id
     FROM tableFoo
    WHERE attr1 = p_param1
      AND attr2 = p_param2;

   IF v_id IS NULL THEN
      INSERT INTO tableFoo(id, attr1, attr2) VALUES (DEFAULT, p_param1, p_param2)
      RETURNING id INTO v_id;
   END IF;

   RETURN v_id;
END;
$$ LANGUAGE plpgsql;
And then on the Node.js side (I'm using node-postgres in this example):
var pg = require('pg');
pg.connect('someConnectionString', function(connErr, client){
  //do some error checking here
  client.query('SELECT foo($1, $2) AS id;', ['foo', 'bar'], function(queryErr, result){
    //error checking
    var id = result.rows[0].id;
  });
});
Something like this, if you are on PostgreSQL 9.1
with test_insert as (
insert into foo (id, col1, col2)
select 42, 'Foo', 'Bar'
where not exists (select * from foo where id = 42)
returning foo.id, foo.col1, foo.col2
)
select id, col1, col2
from test_insert
union
select id, col1, col2
from foo
where id = 42;
It's a bit longish and you need to repeat the id you test for several times, but I can't think of a different solution that involves a single SQL statement.
If a row with id=42 exists, the writable CTE will not insert anything, and the existing row will be returned by the second part of the union.
When testing this I actually thought the new row would be returned twice (therefore a union, not a union all), but it turns out that the second select statement is evaluated against the table state from before the whole statement runs, so it does not see the newly inserted row. So in case a new row is inserted, it will be returned by the RETURNING part.
create table t (
  id serial primary key,
  a integer
);

-- insert only if no row with a = 2 exists yet
insert into t (a)
select 2
from (
  select count(*) as s
  from t
  where a = 2
) s
where s.s = 0;

-- return the id in either case
select id
from t
where a = 2;

Move SQL data from one table to another

I was wondering if it is possible to move all rows of data from one table to another, that match a certain query?
For example, I need to move all table rows from Table1 to Table2 where their username = 'X' and password = 'X', so that they will no longer appear in Table1.
I'm using SQL Server 2008 Management Studio.
Should be possible using two statements within one transaction, an insert and a delete:
BEGIN TRANSACTION;
INSERT INTO Table2 (<columns>)
SELECT <columns>
FROM Table1
WHERE <condition>;
DELETE FROM Table1
WHERE <condition>;
COMMIT;
This is the simplest form. If you have to worry about new matching records being inserted into Table1 between the two statements, you can add an AND EXISTS <in Table2> condition to the DELETE so that only rows which actually made it into Table2 are removed.
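That guard might look like this (a sketch assuming both tables share a key column named id):
DELETE FROM Table1
WHERE <condition>
  AND EXISTS (SELECT 1 FROM Table2 WHERE Table2.id = Table1.id);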
This is an ancient post, sorry, but I only came across it now and I wanted to give my solution to whoever might stumble upon this one day.
As some have mentioned, performing an INSERT and then a DELETE might lead to integrity issues, so perhaps a way to get around it, and to perform everything neatly in a single statement, is to take advantage of the OUTPUT clause and the DELETED pseudo-table.
DELETE FROM [source]
OUTPUT [deleted].<column_list>
INTO [destination] (<column_list>)
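Applied to the tables from the question, that could look like this (column names assumed from the question):
DELETE FROM Table1
OUTPUT DELETED.UserName, DELETED.Password
INTO Table2 (UserName, Password)
WHERE UserName = 'X' AND Password = 'X';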
All these answers run the same query for the INSERT and DELETE. As mentioned previously, this risks the DELETE picking up records inserted between statements and could be slow if the query is complex (although clever engines "should" make the second call fast).
The correct way (assuming the INSERT is into a fresh table) is to do the DELETE against table1 using the key field of table2.
The delete should be:
DELETE FROM tbl_OldTableName WHERE id in (SELECT id FROM tbl_NewTableName)
Excuse my syntax, I'm jumping between engines but you get the idea.
A cleaner representation of what some other answers have hinted at:
DELETE sourceTable
OUTPUT DELETED.*
INTO destTable (Comma, separated, list, of, columns)
WHERE <conditions (if any)>
Yes it is. First INSERT + SELECT, then DELETE the originals.
INSERT INTO Table2 (UserName,Password)
SELECT UserName,Password FROM Table1 WHERE UserName='X' AND Password='X'
then delete the originals:
DELETE FROM Table1 WHERE UserName='X' AND Password='X'
You may want to preserve UserID or some other primary key; then you can use IDENTITY_INSERT to preserve the key.
see more on SET IDENTITY_INSERT on MSDN
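For example (a sketch assuming UserID is the identity column on Table2):
SET IDENTITY_INSERT Table2 ON;

INSERT INTO Table2 (UserID, UserName, Password)
SELECT UserID, UserName, Password
FROM Table1
WHERE UserName = 'X' AND Password = 'X';

SET IDENTITY_INSERT Table2 OFF;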
You should be able to do it with a subquery in the INSERT statement:
INSERT INTO table2(column1, column2) SELECT column1, column2 FROM table1 WHERE ...;
followed by deleting those rows from table1.
Remember to run it as a single transaction so that if anything goes wrong you can roll the entire operation back.
Use this single SQL statement; it is safe, with no need for commit/rollback across multiple statements.
INSERT INTO Table2 (username, password)
SELECT username, password
FROM (
    DELETE Table1
    OUTPUT
        DELETED.username,
        DELETED.password
    WHERE username = 'X' AND password = 'X'
) AS RowsToMove;
Works on SQL Server; make appropriate changes for MySQL.
Try this
INSERT INTO TABLE2 (Cols...) SELECT Cols... FROM TABLE1 WHERE Criteria
Then
DELETE FROM TABLE1 WHERE Criteria
You could try this:
SELECT * INTO tbl_NewTableName
FROM tbl_OldTableName
WHERE Condition1 = @Condition1Value
Then run a simple delete:
DELETE FROM tbl_OldTableName
WHERE Condition1 = @Condition1Value
You may use "Logical Partitioning" to switch data between tables:
By updating the Partition Column, data will be automatically moved to the other table:
here is the sample:
CREATE TABLE TBL_Part1
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part1 CHECK(PartitionColumn = 'TBL_Part1'),
CONSTRAINT TBL_Part1_PK PRIMARY KEY(PartitionColumn, id)
);
CREATE TABLE TBL_Part2
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part2 CHECK(PartitionColumn = 'TBL_Part2'),
CONSTRAINT TBL_Part2_PK PRIMARY KEY(PartitionColumn, id)
);
GO
CREATE VIEW TBL(id, val, PartitionColumn)
WITH SCHEMABINDING
AS
SELECT id, val, PartitionColumn FROM dbo.TBL_Part1
UNION ALL
SELECT id, val, PartitionColumn FROM dbo.TBL_Part2;
GO
--Insert sample to TBL ( will be inserted to Part1 )
INSERT INTO TBL
VALUES(1, 'rec1', 'TBL_Part1');
INSERT INTO TBL
VALUES(2, 'rec2', 'TBL_Part1');
GO
--Query sub table to verify
SELECT * FROM TBL_Part1
GO
--move the data to table TBL_Part2 by Logical Partition switching technique
UPDATE TBL
SET
PartitionColumn = 'TBL_Part2';
GO
--Query sub table to verify
SELECT * FROM TBL_Part2
Here is how to do it with a single statement:
WITH deleted_rows AS (
DELETE FROM source_table WHERE id = 1
RETURNING *
)
INSERT INTO destination_table
SELECT * FROM deleted_rows;
EXAMPLE:
postgres=# select * from test1 ;
id | name
----+--------
1 | yogesh
2 | Raunak
3 | Varun
(3 rows)
postgres=# select * from test2;
id | name
----+------
(0 rows)
postgres=# WITH deleted_rows AS (
postgres(# DELETE FROM test1 WHERE id = 1
postgres(# RETURNING *
postgres(# )
postgres-# INSERT INTO test2
postgres-# SELECT * FROM deleted_rows;
INSERT 0 1
postgres=# select * from test2;
id | name
----+--------
1 | yogesh
(1 row)
postgres=# select * from test1;
id | name
----+--------
2 | Raunak
3 | Varun
If the two tables use the same ID or have a common UNIQUE key:
1) Insert the selected records into table2:
INSERT INTO table2 SELECT * FROM table1 WHERE (conditions)
2) Delete the selected records from table1 if present in table2:
DELETE A FROM table1 AS A INNER JOIN table2 AS B ON A.ID = B.ID WHERE (A.conditions)
This will create a new table and copy all the data from the old table into it:
SELECT * INTO event_log_temp FROM event_log
Then you can clear the old table's data:
DELETE FROM event_log
For some scenarios, it might be the easiest to script out Table1, rename the existing Table1 to Table2 and run the script to recreate Table1.