sync data between two postgres database tables - sql

I have table A in db1 and table B in db2.
I want to insert only the non-existing rows from table A into table B, and if a row already exists in table B, update it.
What is the best way to perform this? I have hundreds of rows to insert and update, and many tables. I'm using DbVisualizer.
Thanks.

One method uses a not exists subquery. Something like this:
insert into b (col1, ...)
select col1, ...
from a
where not exists (select 1 from b where b.? = a.?);
There are other methods. If you have a unique constraint/index on b that defines uniqueness, then you might want an on conflict clause instead. If you are trying to prevent duplicates, then a unique constraint/index is the correct solution.
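Since the question also asks to update rows that already exist in b, note that not exists only covers the insert half. With a unique constraint on the key column, a single insert ... on conflict statement handles both cases in one pass (PostgreSQL 9.5+). A minimal sketch, where id, col1, and col2 are placeholder names, not from the original question:

insert into b (id, col1, col2)
select id, col1, col2
from a
on conflict (id)
do update set
    col1 = excluded.col1,  -- excluded refers to the row proposed for insertion
    col2 = excluded.col2;

This requires a unique constraint or index on b(id) for the conflict target.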

Related

I want to delete the records from a parent table using child table in postgresql

I have a table (A) with 200,000 records. I have created a backup table (B) from table (A) which contains around 100,000 records. Now I have to delete from the parent table A all the records which are present in table B.
I am using the below query:
delete from A where id in (select id from B);
The query I am using is taking a lot of time; the delete is happening very slowly. Could someone please help me reduce the time taken to delete the records?
Any help will be appreciated.
How about creating a table with the records you want to keep, dropping the old A table, and renaming the new one back to A?
A query like:
create table C as
select A.*
from A left outer join B on A.id = B.id
where B.id is null
should perform well if id is indexed.
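Putting the full swap together as described above, a sketch (assuming id is the key; any indexes, constraints, and permissions on A would need to be recreated on the new table):

begin;
create table C as
select A.*
from A left outer join B on A.id = B.id
where B.id is null;
drop table A;
alter table C rename to A;
commit;

Since PostgreSQL DDL is transactional, other sessions never see A missing during the swap.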

Snowflake Append two tables together

I have data in a table and some new data in an S3 stage.
I want to append the stage data to the existing table.
What is the best way to do that in Snowflake?
Also, this runs in a script and I don't know what columns are in the tables, but I know they are identical and that no rows will match.
I tried a MERGE, but I need to specify columns.
I guess I could do a CREATE TABLE AS (selecting from both tables), but that makes me create a 3rd table...
INSERT INTO won't work either, since I don't have the column names.
The result should be just the two tables together.
Basically, I want to apply a concat, but to tables.
Thank you =)
Is this what you're looking for?
SELECT * FROM table_a
UNION
SELECT * FROM table_b
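Two hedged notes on this: UNION removes duplicate rows, so for a plain append UNION ALL is the safer choice; and since the question says the column lists are identical, Snowflake also accepts an INSERT without a column list, which appends in place without creating a third table. A sketch, assuming the columns of table_a and table_b match in count and order:

-- append table_b into table_a; no column list is needed
-- when the selected columns match the target in order and count
INSERT INTO table_a
SELECT * FROM table_b;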

INSERT based on another table's row

I need to INSERT a row in table_A depending on the information in a row in table_B.
Is it possible to do this in an isolated way, where the row SELECTed from table_B stays locked until either the new row is INSERTed into table_A or the INSERT is skipped because of the information in table_B's row?
It's really not clear what you are trying to say. I think your problem can be solved with a trigger.
Check this site to learn more about triggers:
http://www.codeproject.com/Articles/25600/Triggers-SQL-Server
You can do this:
INSERT INTO table_A (columns) SELECT columns FROM table_B WHERE condition;
The columns returned by the SELECT must match the columns defined in table_A.
PostgreSQL supports MVCC; custom locking can be done, but it is not recommended.
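If the goal really is to hold the table_B row while deciding, PostgreSQL's SELECT ... FOR UPDATE takes a row-level lock that lasts until the transaction ends. A minimal sketch (table and column names like status and b_id are illustrative, not from the original question):

BEGIN;

-- lock the source row; concurrent writers block until we commit or roll back
SELECT status FROM table_B WHERE id = 42 FOR UPDATE;

-- insert only if the locked row satisfies the condition; otherwise nothing happens
INSERT INTO table_A (b_id, created_at)
SELECT id, now()
FROM table_B
WHERE id = 42 AND status = 'ready';

COMMIT;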

delete old values of a table and update the table with results of same query

My question is simple, but I can't find a way to delete the old values of a table and refill the same table with the results of the same query.
UPDATE
The query is a SELECT on table A, and its results become table B. Table B should contain nothing but the results of the latest query on table A.
I have a very big table, and I need to process the records and create a new table regularly. The old values of this table are not important, only the new ones.
I will appreciate any help.
What about a view, if you only need table B for querying? You said you have a SELECT on table A. Let's say your select is SELECT * FROM TableA WHERE X = Y. Then your statement would be
CREATE VIEW vwTableB AS
SELECT * FROM TableA WHERE X = Y
And then instead of querying tableB you would query vwTableB. Any changes to the data in table A would be reflected in the view so you don't have to keep running a script yourself.
This way the data in vwTableB is kept up to date, and you don't have to keep deleting from and inserting into the second table.
You can use a temporary table to store the results you are working with, if you only need them for one session. It will automatically be dropped when you sign out.
You didn't say which database you are using, but try this:
create temp table tableB as select * from tableA;
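If the table has to persist across sessions, the delete-and-refill the question describes can also be done in place. A sketch, assuming tableB already exists with the right columns and WHERE X = Y stands in for the real filter:

truncate table tableB;               -- throw away all the old values quickly
insert into tableB                   -- refill with the latest query results
select * from tableA where X = Y;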

Query select a bulk of IDs from a table - SQL

I have a table which holds ~1M rows. My application has a list of ~100K IDs which belong to that table (the list being generated by the application layer).
Is there a common method of querying all of these IDs? ~100K individual SELECT queries? A temporary table into which I insert the ~100K IDs, then a SELECT query that joins against the required table?
Thanks,
Doori Bar
You could do it in one query, something like
SELECT * FROM large_table WHERE id IN (...)
Insert a comma-separated list of IDs where I put the ...
Unfortunately, there is no easy way that I know of to parametrize this, so you need to be extra-super careful to avoid SQL injection vulnerabilities.
A temporary table which holds the 100k IDs seems like a good solution. Don't insert them one by one, though; the INSERT ... VALUES syntax in MySQL accepts multiple rows at once.
By the way, where do you get your 100k IDs, if not from the database? If they come from a preceding query, I'd suggest having that query fill the temporary table.
Edit: for a more portable way of inserting multiple rows:
INSERT INTO mytable (col1, col2) SELECT 'foo', 0 UNION SELECT 'bar', 1
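Putting the temporary-table approach together, a sketch (names are illustrative; the multi-row VALUES form works in MySQL and most other databases):

CREATE TEMPORARY TABLE wanted_ids (id INT PRIMARY KEY);

-- load the ~100K IDs in batches of multi-row inserts, not one by one
INSERT INTO wanted_ids (id) VALUES (17), (42), (1001);

-- a single join replaces ~100K individual SELECTs
SELECT t.*
FROM large_table t
JOIN wanted_ids w ON w.id = t.id;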
Do those IDs actually reference the table with the 1M rows?
If so, you could use SELECT ids FROM <1M table>
where ids is the ID column and "1M table" is the name of the table which holds the 1M rows.
But I don't think I really understand your question...