How to rebuild this query using joins - sql

Please help me understand how to rewrite this query using joins.
INSERT INTO mytable
(user_id, tag)
SELECT ?, ?
WHERE NOT EXISTS (SELECT user_id FROM mytable WHERE user_id=? AND tag=?)
I saw similar questions that use two different tables, but here I have only one table.
I need to insert only if the same entry does not exist. The mytable table does not have any UNIQUE constraints, and I can't change the schema.

You can put your parameters into a common table expression that is reusable:
with data (user_id, tag) as (
  values (?, ?)
)
INSERT INTO mytable (user_id, tag)
SELECT d.user_id, d.tag
FROM data d
WHERE NOT EXISTS (SELECT *
                  FROM mytable t
                  WHERE t.user_id = d.user_id
                    AND t.tag = d.tag)
Note that this will not prevent concurrent inserts of the same values. The only way to achieve that is to add a unique constraint (and then you can use ON CONFLICT DO NOTHING).
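For reference, a minimal sketch of that constraint-based approach, assuming a PostgreSQL-style database and that the schema could be altered (which the asker says is not possible here); the constraint name is an assumption:
-- add a unique constraint once, then let the database reject duplicates
ALTER TABLE mytable
  ADD CONSTRAINT mytable_user_tag_uq UNIQUE (user_id, tag);

INSERT INTO mytable (user_id, tag)
VALUES (?, ?)
ON CONFLICT (user_id, tag) DO NOTHING;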

Related

Dynamically storing values in SQL table

I'm trying to dynamically create a variable or table in SQL that will store the distinct values returned by another SQL query.
declare sample_table table
( values varchar(100))
insert into #sample_table values (select t1.value from my_tablw as t1 group by t1.value);
Supposing the distinct values in the value column can change from one query to another, I want to store the result of this query in a user-defined variable/table that can be used later in another query.
Depending on your definition of "can be used later", you can use a local temp table or a table variable; the example below uses a local temp table. You just need to change the syntax a bit and drop the VALUES clause, since you are inserting from the results of a query. I also used DISTINCT below, which is clearer than a GROUP BY without an aggregate function.
create table #sample_table ([values] varchar(100))

insert into #sample_table
select distinct t1.value
from my_tablw as t1

--one way to use it
select *
from newTable
where columnVal in (select * from #sample_table)

--another way to use it
select at.*
from anotherTable at
inner join #sample_table t on
  t.[values] = at.columnVal

--and another way...
select f.*
from finalTable f
where exists (select * from #sample_table t where t.[values] = f.columnVal)
If you need this to be used outside the scope of your current batch, you'll need to use a persisted table or global temporary table.
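For example, a rough sketch with a global temporary table (note the double ##), which stays visible to other batches and sessions until every session referencing it has closed:
-- ## makes this a global temporary table, visible outside the current batch
create table ##sample_table ([values] varchar(100));

insert into ##sample_table
select distinct t1.value
from my_tablw as t1;

-- usable from a different batch or session
select * from ##sample_table;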

Insert new/changed rows from one table to another in Oracle SQL

I have two tables with the same number of columns: Table A and Table B.
Every day I insert data from Table B into Table A. Right now the insert query works:
insert into table_a (select * from table_b);
But with this insert, the same data that was inserted earlier gets inserted again. I only want the rows that are new or changed compared to the old data. How can this be done?
You can use minus:
insert into table_a
select *
from table_b
minus
select *
from table_a;
This assumes that by "duplicate" you mean that all the columns are duplicated.
If you have a timestamp field, you could use it to limit the records to those created after the last copy.
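For instance, assuming both tables carry a hypothetical created_at timestamp column that is copied along with the row, a sketch could look like this:
insert into table_a
select *
from table_b b
-- created_at is an assumed column; handle the first load (empty table_a) separately
where b.created_at > (select max(a.created_at) from table_a a);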
Another option, assuming that you have a primary key (the id column in my example) that you can use to know whether a record has already been copied, is to create a table_c (with the same structure as table_a and table_b) and do the following:
insert into table_c
select b.*
from table_b b
left join table_a a on (a.id = b.id)
where a.id is null;

insert into table_a select * from table_c;

truncate table table_c;
You need to adjust this query in order to use the actual primary key.
Hope this helps!
If the tables have a primary or unique key, then you could leverage that in an anti-join:
insert into table_a
select *
from table_b b
where not exists (
  select null
  from table_a a
  where a.pk_field_1 = b.pk_field_1
    and a.pk_field_2 = b.pk_field_2
)
You don't say what your key is. Assuming you have a key column ID, that is, you only want IDs that are not already in Table A, you can also use a MERGE statement for this:
MERGE INTO A USING B ON (A.ID = B.ID)
WHEN NOT MATCHED THEN INSERT (... columns of A) VALUES (... columns of B)
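For example, if ID is the key and the remaining columns are hypothetical COL1 and COL2, the statement could look like this sketch:
MERGE INTO A
USING B
ON (A.ID = B.ID)
WHEN NOT MATCHED THEN
  -- COL1 and COL2 are placeholder column names
  INSERT (ID, COL1, COL2)
  VALUES (B.ID, B.COL1, B.COL2);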

Comparing records in duplicate tables in different Oracle databases

I have the same table in two Oracle databases. One is a staging database whose records are loaded into the primary. I want to go through the staging table and see if there are no changes to the primary record. If there aren't any, delete it. How can I do this?
To get the ones where something has changed you can use:
SELECT * FROM "staging_table"
MINUS
SELECT * FROM "table";
So, assuming that the table has a primary key, you can do:
DELETE FROM "staging_table"
WHERE primary_key_column NOT IN
  ( SELECT primary_key_column
    FROM (
      SELECT * FROM "staging_table"
      MINUS
      SELECT * FROM "table"
    )
  );
Something like this should work:
delete from stagingtable
where id in
  (select st.id
   from stagingtable st
   join productiontable pt
     on st.id = pt.id
    and st.nextfield = pt.nextfield
    etc
  )

MERGE - When not matched INSERT's exception

I have a PL/SQL procedure using MERGE:
MERGE INTO table_dest d
USING (SELECT * FROM my_Table) s
ON (s.id = d.id)
WHEN MATCHED THEN UPDATE SET d.col1 = s.col1
WHEN NOT MATCHED THEN INSERT (id, col1) VALUES (s.id, s.col1);
Now let's say the query s returns multiple rows with the same id, which will raise an ORA-00001: unique constraint violated error.
What I want to do is send the duplicated rows to another table, my_Table_recyledbin, so that the INSERT succeeds. Can I use EXCEPTION WHEN DUP_VAL_ON_INDEX? If yes, how do I use it with the MERGE statement?
Thanks in advance
Why not handle the archiving of duplicate rows to the recycle bin table in a separate statement?
Firstly, do your merge (aggregating the duplicate rows to avoid the unique constraint error). I've assumed a MAX aggregate function on col1, but you can use whatever suits your needs -- you have not specified how to decide which row to use when there are duplicates.
MERGE INTO table_dest d
USING (SELECT a.id, MAX(a.col1) AS col1
       FROM my_Table a
       GROUP BY a.id) s
ON (s.id = d.id)
WHEN MATCHED THEN UPDATE SET d.col1 = s.col1
WHEN NOT MATCHED THEN INSERT (id, col1) VALUES (s.id, s.col1);
Then, deal with the duplicate rows. I'm assuming that your recycle bin table does allow duplicate ids to be inserted:
INSERT INTO my_Table_recyledbin (id, col1)
SELECT s.id, s.col1
FROM my_Table s
WHERE EXISTS (SELECT 1
              FROM my_Table t
              WHERE t.id = s.id
                AND t.ROWID != s.ROWID);
Hopefully, that should fulfil your needs.
Can't you just use an error-logging clause? I.e., add this line at the end of your MERGE statement:
LOG ERRORS INTO my_Table_recycledbin
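A hedged sketch of what that could look like: LOG ERRORS needs an error-logging table with the extra ORA_ERR_* columns, which is normally generated with DBMS_ERRLOG, so the table name ERR$_TABLE_DEST below is an assumption rather than the asker's recycle-bin table:
-- create the error-logging table once for the MERGE target
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TABLE_DEST', 'ERR$_TABLE_DEST');
END;
/

MERGE INTO table_dest d
USING (SELECT * FROM my_Table) s
ON (s.id = d.id)
WHEN MATCHED THEN UPDATE SET d.col1 = s.col1
WHEN NOT MATCHED THEN INSERT (id, col1) VALUES (s.id, s.col1)
-- offending rows are diverted to the error table instead of failing the statement
LOG ERRORS INTO err$_table_dest ('merge run') REJECT LIMIT UNLIMITED;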

Compare unique values from two MySQL tables

I have two MySQL tables: TableA has 10,000 records and TableB has 2,000 records.
I want to copy the 8,000 unique records from TableA into TableB, ignoring the 2,000 that have already been copied into TableB.
If uniqueness is determined by PRIMARY KEY constraint or UNIQUE constraint, then you can use INSERT IGNORE:
INSERT IGNORE INTO TableB SELECT * FROM TableA;
The rows that are duplicates and that conflict with rows already in TableB will be silently skipped and the other 8,000 rows should be inserted.
See the docs on INSERT for more details.
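If TableB does not yet have such a constraint, a possible sketch, assuming a hypothetical id column is what identifies duplicates:
-- uq_tableb_id and the id column are assumptions about the schema
ALTER TABLE TableB ADD UNIQUE KEY uq_tableb_id (id);

INSERT IGNORE INTO TableB
SELECT * FROM TableA;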
If you need to do this in PHP, read about the array_diff_key() function. Store your arrays with the primary key values as the key of the array elements. No guarantees for the performance of this PHP function on such large arrays, though!
Use INSERT INTO ... SELECT with a NOT EXISTS check:
INSERT INTO TABLE_B
SELECT *
FROM TABLE_A a
WHERE NOT EXISTS (SELECT NULL
                  FROM TABLE_B b
                  WHERE b.column = a.column)
You'll need to update the WHERE b.column = a.column condition to reflect however you determine that a record already exists in TABLE_B.
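For example, if a record counts as the same when a hypothetical (customer_id, order_date) pair matches:
INSERT INTO TABLE_B
SELECT *
FROM TABLE_A a
WHERE NOT EXISTS (SELECT NULL
                  FROM TABLE_B b
                  -- customer_id and order_date are placeholder column names
                  WHERE b.customer_id = a.customer_id
                    AND b.order_date = a.order_date)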
What about something like this:
insert into TableB
select *
from TableA
where not exists (
  select 1
  from TableB
  where TableB.id = TableA.id
)
Or, if uniqueness in TableB is enforced by its primary key, an INSERT IGNORE might do the trick, I suppose.