Insert into … values ( SELECT … FROM … ) in postgresql? - sql

I am working with a PostgreSQL database. I have a database, db1, which contains a table App1.
I need to run a SELECT query against this App1 table in db1, and then insert whatever results come back, as-is, into an App2 table that lives in another database, db2.
Below is my query which I am running against App1 table which is in db1 -
select col1, col2 from App1 limit 5
Now, is there any way I can combine an INSERT statement with the above SELECT so that the results are inserted into the App2 table in db2 automatically?
Something along this line -
Insert into … values ( SELECT … FROM … )
Is this possible in PostgreSQL, given that the two tables are in different databases?

PostgreSQL doesn't support cross-database SELECT, so to do this between databases you must use the postgres_fdw foreign data wrapper or dblink. See the documentation.
Often, if you find yourself wanting to do this, you should be using separate schemas in a single database instead.
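For illustration, here is a minimal postgres_fdw sketch, run while connected to db2. The server name, connection options, and column types are assumptions; adjust them to your setup:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER db1_server FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'db1');
CREATE USER MAPPING FOR CURRENT_USER SERVER db1_server
    OPTIONS (user 'app_user', password 'app_password');
-- Column types are assumed; match them to the real App1 definition.
CREATE FOREIGN TABLE app1_remote (col1 text, col2 text)
    SERVER db1_server OPTIONS (table_name 'App1');
-- The cross-database copy is then an ordinary INSERT ... SELECT:
INSERT INTO App2 (col1, col2)
SELECT col1, col2 FROM app1_remote LIMIT 5;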
BTW, it's generally:
INSERT INTO ... SELECT ...
i.e. there's no subquery, no parentheses. That's because the VALUES clause is actually a standalone statement too:
INSERT INTO ... VALUES ...
observe:
regress=> VALUES (1,2), (2,3);
 column1 | column2
---------+---------
       1 |       2
       2 |       3
(2 rows)

Related

"WITH AS" Working in Postgres but not in H2 dabatabse

I am writing a single query to insert data into 2 tables using WITH AS. The query works fine on Postgres, but on the H2 database it throws a syntax error.
I have 2 tables.
Table 1 has 2 columns: a primary key table1_ID and a table1_value column.
Table 2 has 3 columns: a primary key table2_Id, a table2_value column, and table1_id as a foreign key.
The query is like this:
WITH ins AS (
    INSERT INTO table_1 (table1_value) VALUES ('table1_value')
    RETURNING table1_ID AS t1_id
)
INSERT INTO table_2 (table2_value, table1_id) VALUES ('table2_value', (SELECT t1_id FROM ins));
This query works fine on Postgres, but on H2 it throws a syntax error with a message along the lines of:
expected "(, WITH, SELECT, FROM"; SQL statement ...
H2 database reference links:
http://www.h2database.com/html/advanced.html#recursive_queries
http://www.h2database.com/html/commands.html?highlight=insert&search=insert#firstFound
See the Compatibility section of the PostgreSQL INSERT documentation, which explains why H2 rejects this: https://www.postgresql.org/docs/current/sql-insert.html
INSERT conforms to the SQL standard, except that the RETURNING clause
is a PostgreSQL extension, as is the ability to use WITH with INSERT,
and the ability to specify an alternative action with ON CONFLICT.
Also, the case in which a column name list is omitted, but not all the
columns are filled from the VALUES clause or query, is disallowed by
the standard.
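A portable two-statement workaround, since RETURNING and WITH ... INSERT are exactly the PostgreSQL extensions H2 trips over, is to run the inserts separately and look the generated key up. A sketch, assuming table1_value identifies the new row uniquely:
INSERT INTO table_1 (table1_value) VALUES ('some value');
INSERT INTO table_2 (table2_value, table1_id)
SELECT 'other value', table1_ID
FROM table_1
WHERE table1_value = 'some value';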

Is there a way to create a temporary table in SQL that deletes right after the query finishes? [duplicate]

This question already has answers here:
Creating temporary tables in SQL
(2 answers)
Closed 6 years ago.
I have a complicated query I'm working on. It involves several tables.
It would be very helpful for me to create a new table and then simply query from that. However, this is a shared database and I don't want to make a new table, especially when I don't plan on using that table again. (I just want it as a stepping stone in my query.)
Is it possible to create a table just for one query that is deleted right when the query is done (i.e. a temporary table)?
Sure. Use CREATE TEMPORARY TABLE:
=> CREATE TEMPORARY TABLE secret_table(id BIGSERIAL, name text);
=> INSERT INTO secret_table(name) VALUES ('Good day');
INSERT 0 1
=> INSERT INTO secret_table(name) VALUES ('Good night');
INSERT 0 1
=> SELECT * FROM secret_table;
 id |    name
----+------------
  1 | Good day
  2 | Good night
(2 rows)
But upon reconnection:
psql (9.5.4)
Type "help" for help.
=> SELECT * FROM secret_table;
ERROR: relation "secret_table" does not exist
LINE 1: SELECT * FROM secret_table;
You could use temporary tables, which drop themselves at the end of the session in which they were created (not when the query finishes, as you said). You can always drop one manually at the end of your operation, though.
If you'd like to create such a table as the result of a query, here is a sample to expand to your needs:
CREATE TEMP TABLE tmp_table_name AS ( SELECT 1 AS col1 );
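If you want the table gone as soon as the work finishes rather than at session end, PostgreSQL also accepts ON COMMIT DROP, which drops the temporary table at the end of the enclosing transaction:
BEGIN;
CREATE TEMP TABLE tmp_table_name ON COMMIT DROP AS ( SELECT 1 AS col1 );
SELECT * FROM tmp_table_name;
COMMIT;  -- tmp_table_name is gone after this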
But I'm thinking you may be looking for a CTE instead of a table since you're saying that you're planning to use it only once. Consider this:
WITH tmp_table AS ( SELECT 1 AS col1 )
SELECT *
FROM tmp_table
...
You can also do it dynamically: the result of a query is itself a table (a derived table), so you can select from it directly:
select * from (select col1, col2, col3
               from my_complex_table
               ...) t1
Use the TEMPORARY keyword: a temporary table is only visible in your current connection and is dropped automatically after you disconnect.
The other way would be to create a regular table and drop it yourself when you no longer need it.
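That manual version is just (a sketch, with an invented name for the intermediate table):
CREATE TABLE stepping_stone AS
SELECT col1, col2, col3 FROM my_complex_table;
-- ... run the complicated query against stepping_stone ...
DROP TABLE stepping_stone;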

How to add more rows to an existing DB Table

I'm currently updating an existing DB table.
The table has 14924 rows, and I'm trying to insert new data which requires 15000 rows.
When running my Query, I'm getting this error message:
There are fewer columns in the INSERT statement than values specified
in the VALUES clause. The number of values in the VALUES clause must
match the number of columns specified in the INSERT statement.
Is there a way to add the additional 76 rows as needed?
I'm using MSSMS (Microsoft SQL Server Management Studio)
Query I'm running:
Insert INTO [survey].[dbo].[uid_table] (UID)
VALUES ('F32975648JX2','F32975681JX2',..+14998 more)
Should I clear the column first by setting it to NULL?
What I'm trying to do is add all the VALUES to the UID column.
My columns are currently set up as:
UID | Email | Name | Title | Company | Address1 | Address2 | DateCreated |
All columns are set to NULL except for UID, which already contains values like the ones above. I just need to replace the old values with the new ones, but I'm getting the error stated above.
To insert more than one row into a column, you need to write the INSERT statement in this format:
Insert INTO [survey].[dbo].[uid_table] (UID)
VALUES ('F32975648JX2'),
       ('F32975681JX2'),
       ..+14998 more
Also note that the maximum number of rows that can be constructed by inserting rows directly in the VALUES list is 1000, so you have to break the INSERT statement into batches of at most 1000 rows each.
To insert more than 1000 rows, use one of the following methods
Create multiple INSERT statements
Use a derived table
Bulk import the data by using the bcp utility or the BULK INSERT
statement
Derived table approach
Insert INTO [survey].[dbo].[uid_table] (UID)
SELECT 'F32975648JX2'
UNION ALL
SELECT 'F32975681JX2'
UNION ALL
..+14998 more
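An equivalent form wraps the VALUES list in a derived table. Per the SQL Server documentation, the 1000-row cap applies to a VALUES list used directly as the INSERT source, not to one used as a derived table:
INSERT INTO [survey].[dbo].[uid_table] (UID)
SELECT v.UID
FROM (VALUES ('F32975648JX2'),
             ('F32975681JX2')
             -- ..+14998 more rows
     ) AS v(UID);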
Your problem is in your INSERT statement.
An example:
INSERT INTO table (col1, col2, col3,...)
VALUES(valCol1, valcol2, valcol3...)
Ensure that the number of columns (col1, col2, col3, ...) matches the number of values in the VALUES clause (valCol1, valcol2, valcol3, ...): three columns and three values in this case.

SQL Insert into 2 tables, passing the new PK from one table as the FK in the other

How can I achieve an insert query on 2 tables that will insert the primary key generated by one table as a foreign key into the second table?
Here's a quick example of what I'm trying to do, but I'd like this to be one query, perhaps with a join.
INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 )
INSERT INTO Table2 (foreign_key_column) VALUES (primary_key_from_table1_insert)
I'd like this to be one join query.
I've made some attempts but I can't get this to work correctly.
This is not possible to do with a single query.
The record in the PK table needs to be inserted before the new PK is known and can be used in the FK table, so at least two queries are required (though normally 3, as you need to retrieve the new PK value for use).
The exact syntax depends on the database being used, which you have not specified.
If you need this set of inserts to be atomic, use transactions.
Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).
Here's the mysql solution (with executable test code below):
INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );
INSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());
Note: These should be executed using a single connection.
Here's the test code:
create table tab1 (id int auto_increment primary key, note text);
create table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);
insert into tab1 values (null, 'row 1');
insert into tab2 values (null, LAST_INSERT_ID(), 'row 1');
select * from tab1;
select * from tab2;
mysql> select * from tab1;
+----+-------+
| id | note  |
+----+-------+
|  1 | row 1 |
+----+-------+
1 row in set (0.00 sec)

mysql> select * from tab2;
+----+---------+-------+
| id | tab2_id | note  |
+----+---------+-------+
|  1 |       1 | row 1 |
+----+---------+-------+
1 row in set (0.00 sec)
From your example, if the tuple (col1, col2) can be considered unique, then you could do:
INSERT INTO table1 (col1, col2) VALUES (val1, val2);
INSERT INTO table2 (foreign_key_column) SELECT id FROM Table1 WHERE col1 = val1 AND col2 = val2;
There may be a few ways to accomplish this. Probably the most straightforward is to use a stored procedure that accepts as input all the values you need for both tables, then inserts into the first, retrieves the new PK, and inserts into the second.
If your DB supports it, you can also tell the first INSERT to return a value:
INSERT INTO table1 ... RETURNING primary_key;
This at least saves the SELECT step that would otherwise be necessary. If you go with a stored procedure approach, you'll probably want to incorporate this into that stored procedure.
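On PostgreSQL specifically, RETURNING combines with a data-modifying CTE so that the two inserts really do become one statement. A sketch using the question's invented names (and assuming Table1's primary key column is called id):
WITH new_row AS (
    INSERT INTO Table1 (col1, col2) VALUES (val1, val2)
    RETURNING id
)
INSERT INTO Table2 (foreign_key_column)
SELECT id FROM new_row;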
It could also possibly be done with a combination of views and rules or triggers, if your DB supports them, though this is probably far messier than it's worth. I believe it could be done in PostgreSQL, but I'd still advise against it: you'd need a view that contains all of the columns represented by both table1 and table2, plus an ON INSERT DO INSTEAD rule with three parts: the first inserts into the first table, the second retrieves the new PK and updates the NEW row, and the third inserts into the second table. (Note: this view doesn't even have to reference the two literal tables, and would never be used for queries; it only has to have columns whose names and data types match the real tables.)
Of course all of these methods are just complicated ways of getting around the fact that you can't really do what you want with a single command.

mysql: is there a way to do a "INSERT INTO" 2 tables?

I have one table with 2 data columns that I essentially want to split into 2 tables:
table A columns: user_id, col1, col2
New tables:
B: user_id, col1
C: user_id, col2
I want to do:
INSERT INTO B (user_id, col1) SELECT user_id,col1 from A;
INSERT INTO C (user_id,col2) SELECT user_id, col2 from A;
But I want to do it in one statement. The table is big, so I just want to do it in one pass. Is there a way to do this?
Thanks.
No, you can't insert into more than one table at the same time. INSERT syntax allows only a single table name.
http://dev.mysql.com/doc/refman/5.5/en/insert.html
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name ...
Write a stored procedure to encapsulate the two inserts and wrap them in a transaction, as sketched below.
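A minimal sketch of such a procedure (the procedure name is invented; the tables must support transactions, i.e. be InnoDB):
DELIMITER //
CREATE PROCEDURE split_table_a()
BEGIN
    START TRANSACTION;
    INSERT INTO B (user_id, col1) SELECT user_id, col1 FROM A;
    INSERT INTO C (user_id, col2) SELECT user_id, col2 FROM A;
    COMMIT;
END //
DELIMITER ;
CALL split_table_a();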
If by "in one statement", you mean "atomically" - so that it can never happen that it's inserted into one table but not the other - then transactions are what you're looking for:
START TRANSACTION;
INSERT INTO B (user_id, col1) SELECT user_id,col1 from A;
INSERT INTO C (user_id,col2) SELECT user_id, col2 from A;
COMMIT;
If you actually need to do this in a single statement, you could wrap these in a stored procedure and call that, as @lexu suggests.
See the manual for reference: http://dev.mysql.com/doc/refman/5.0/en/commit.html
Caveat: this will not work with MyISAM tables (no transaction support), they need to be InnoDB.
Unless your tables are spread over multiple physical disks, the speed of the select/insert is likely to be IO bound.
Trying to insert into two tables at once (even if it were possible) would likely increase the total insert time, since the disk would have to thrash more while writing to both tables.