Upsert query is not updating the records in postgres - sql

I am trying to upsert from a temp table into a target table, but my upsert query is not updating the records in PostgreSQL. It inserts new rows, but it never updates existing ones.
Please find below query:
Insert into store
Select t.*
from store_temp t
where t.id not in (select a.id from store a)
on conflict (id)
DO Update
Set source = EXCLUDED.Source;
Any help would be really appreciated.

Your syntax looks correct, but I don't think you want the where clause. Instead:
Insert into store ( . . . )
select . . .
from store_temp t
on conflict (id) do update
set source = EXCLUDED.Source;
The . . . are for the column list. I recommend being explicit in inserts.
Then you need to be sure that id is declared as the primary key or at least has a unique constraint or index.
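Putting the pieces together, a minimal self-contained sketch (the tables here have only `id` and `source`, as a stand-in for the question's real columns, and `id` carries the required unique constraint):

```sql
-- id must be a primary key or have a unique constraint/index
-- for ON CONFLICT (id) to be usable.
create table store (
    id     integer primary key,
    source text
);

create temporary table store_temp (
    id     integer,
    source text
);

-- Insert every staged row; on an id collision, update the existing
-- row instead of failing. No WHERE filter is needed -- ON CONFLICT
-- already handles the "row exists" case.
insert into store (id, source)
select id, source
from store_temp
on conflict (id) do update
    set source = EXCLUDED.source;
```

`EXCLUDED` is the pseudo-table holding the row that was proposed for insertion, so `EXCLUDED.source` carries the incoming value from `store_temp`.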

Related

Getting Newly Created Range of ID's When Inserting to SQL Server

I'm currently trying to insert to a database table and then update another table with the ID's from all the newly created rows.
The problem is that using Scope_Identity() works as expected and just returns the last created identity.
What I need however is for each row created, to grab the ID and insert that during the update.
Is this possible without using a cursor and having to insert and update one row at a time?
Use the output clause. Something like this:
declare @ids table (id int);

insert into t ( . . . )
output inserted.id into @ids
select . . .
from . . . ;
This puts all the ids into the table variable (which can be a regular table). You then have access to all of them.
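Filling in the placeholders, a minimal sketch (table `t` and its `val` column are made up for illustration):

```sql
-- Table variable to collect the generated identities.
declare @ids table (id int);

create table t (
    id  int identity(1, 1) primary key,
    val varchar(50)
);

-- Capture the generated identity of every inserted row in one pass;
-- no cursor and no SCOPE_IDENTITY() needed.
insert into t (val)
output inserted.id into @ids (id)
values ('a'), ('b'), ('c');

-- @ids now holds one row per inserted row; join it back to the
-- other table to drive the update.
select id from @ids;
```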

How can I delete from one table where rows exist in another table in DB2?

What I'm trying to achieve is very straightforward, but for some reason I can't get it to work, partly because I can't find any help specific to DB2.
I want to delete all records from a table if they exist in another table (these are both very large tables with 1m+ records). Here's what I've tried:
DELETE FROM dave.Last
WHERE NUMBERS IN
    (SELECT SUBSTR(MSISDN, 4) FROM a1313495.COMBINED_NUMBERS);
dave.Last is the table I'm trying to selectively delete from. I don't know if that query works; it ran for a very long time and I had to cancel it.
I've also tried
DELETE t1.NUMBERS
FROM dave.Last t1
LEFT JOIN a1313495.COMBINED_NUMBERS t2 ON Substr(t2.MSISDN,4) = t1.NUMBERS
WHERE Substr(t2.MSISDN,4) = t1.NUMBERS
This does not work either as it returns SQL Error: DB2 SQL Error: SQLCODE=-104. Surprisingly, the above query works when I change "DELETE" to "SELECT".
How can I achieve this? I need to use the most optimal method too because of the size of the tables.
Hmmm . . . I would suggest an expression-based index:
create index idx_combined_numbers_msisdn_4
on a1313495.COMBINED_NUMBERS(substr(MSISDN, 4));
Then use EXISTS:
DELETE FROM dave.Last l
WHERE EXISTS (SELECT 1
              FROM a1313495.COMBINED_NUMBERS cn
              WHERE l.NUMBERS = SUBSTR(cn.MSISDN, 4)
             );

Tricky problem with primary key in INSERT statement

I have a table in a SQL Server database with an autoincrementing primary key [ID]. Is there some way to include the [ID] column in the INSERT statement so that the database would ignore it? Some trick with table configuration?
I am not working on a PC (I'm on an Omron NJ PLC), so I can't write statements myself; instead, they are mapped from Structs. If possible, I want to use the same Structs for both INSERT and SELECT (where I need [ID] for a later UPDATE). Also, I would rather not generate the ID myself, although that would be the lesser evil.
In SQL Server, you need to provide a value for columns that are inserted. There is a special value called DEFAULT that inserts the default value. However, it cannot be used with IDENTITY columns.
The normal insert method is to simply leave out the column:
insert into t (<all columns but id>)
values (<all values for other columns>);
Even a trigger on the tables doesn't get around this limitation, but there is a trick you can use:
Create a view on the table selecting all columns.
Create an instead of insert trigger on the view.
Insert into the view instead of the table.
This looks like:
create view v_t as
    select * from t;

create trigger trig_v on v_t instead of insert as
begin
    insert into t ( . . . ) -- all columns except id
    select . . . -- all columns except id
    from inserted;
end;

insert into v_t -- I recommend listing the columns, but it's not required
values (NULL, . . . );
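Filling in the placeholders, a complete sketch (table `t` and its `val` column are hypothetical, not from the question):

```sql
create table t (
    id  int identity(1, 1) primary key,
    val varchar(50)
);
go

create view v_t as
    select * from t;
go

-- The trigger intercepts inserts against the view and silently
-- discards whatever was supplied for id, letting IDENTITY generate it.
create trigger trig_v on v_t instead of insert as
begin
    insert into t (val)
    select val
    from inserted;
end;
go

-- The Struct can now map id as well; its value is simply ignored.
insert into v_t (id, val) values (NULL, 'abc');
```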
If you want to insert an explicit value for the ID on an identity column, you can run SET IDENTITY_INSERT <table> ON before your insert. Afterwards, re-enable the identity with SET IDENTITY_INSERT <table> OFF.
Hope this helps.
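The identity-insert route, sketched (the statement is SET IDENTITY_INSERT, and the table name `t` is assumed for illustration):

```sql
-- Temporarily allow explicit values in t's identity column.
-- Only one table per session can have this ON at a time.
SET IDENTITY_INSERT t ON;

-- With IDENTITY_INSERT ON, an explicit column list is mandatory.
insert into t (id, val) values (42, 'explicit id');

-- Re-enable automatic identity generation.
SET IDENTITY_INSERT t OFF;
```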

Select All From Table Except Specific Column/s Without Temporary Table

I've been searching for a way to get all data from a certain table except for a certain column.
Is there a way to do this without creating a temporary table? I find this approach creative but inefficient.
I found this, but again it creates a temporary table. I suppose a temporary table is okay as long as it isn't an actual table that I can access.
SELECT * INTO #TempTable
FROM TABLE_NAME;

ALTER TABLE #TempTable
DROP COLUMN COLUMN_NAME;

SELECT * FROM #TempTable;

DROP TABLE #TempTable;
Again, my goal is to avoid creating a temporary table that I would later have to delete to make it seem 'data-like'. Sorry, I'm not quite sure how to put it into words.
Just select the columns you do want . . .
select . . .
from t;
You can create a view with the columns you want.
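For example, a view that hides one column (the table and column names below are made up for illustration):

```sql
-- Base table with a column we want to hide.
create table orders (
    id             int,
    customer       varchar(50),
    internal_notes varchar(200)   -- the column to exclude
);

-- The view exposes everything except internal_notes, so querying it
-- with SELECT * returns only the listed columns.
create view orders_public as
    select id, customer
    from orders;

select * from orders_public;
```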

Duplicate key error with PostgreSQL INSERT with subquery

There are some similar questions on StackOverflow, but they don't seem to exactly match my case. I am trying to bulk insert into a PostgreSQL table with a composite unique constraint. I created a temporary table (temptable) without any constraints and loaded the data (with possibly some duplicate values) into it. So far, so good.
Now, I am trying to transfer the data to the actual table (realtable) with unique index. For this, I used an INSERT statement with a subquery:
INSERT INTO realtable
SELECT * FROM temptable WHERE NOT EXISTS (
SELECT 1 FROM realtable WHERE temptable.added_date = realtable.added_date
AND temptable.product_name = realtable.product_name
);
However, I am getting duplicate key errors:
ERROR: duplicate key value violates unique constraint "realtable_added_date_product_name_key"
SQL state: 23505
Detail: Key (added_date, product_name)=(20000103, TEST) already exists.
My question is, shouldn't the WHERE NOT EXISTS clause prevent this from happening? How can I fix it?
The NOT EXISTS clause only prevents rows from temptable conflicting with existing rows from realtable; it will not prevent multiple rows from temptable from conflicting with each other. This is because the SELECT is calculated once based on the initial state of realtable, not re-calculated after each row is inserted.
One solution would be to use a GROUP BY or DISTINCT ON in the SELECT query, to omit duplicates, e.g.
INSERT INTO realtable
SELECT DISTINCT ON (added_date, product_name) *
FROM temptable
WHERE NOT EXISTS (
    SELECT 1
    FROM realtable
    WHERE temptable.added_date = realtable.added_date
      AND temptable.product_name = realtable.product_name
)
ORDER BY ???; -- this ORDER BY determines which of a set of duplicates is kept by DISTINCT ON
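As a concrete sketch of that ORDER BY, note that PostgreSQL requires the ORDER BY to start with the DISTINCT ON expressions; the `updated_at` tie-breaker column below is an assumption, not from the question:

```sql
-- Keep, for each (added_date, product_name) pair, the most recently
-- updated staged row; updated_at is a hypothetical tie-breaker.
INSERT INTO realtable
SELECT DISTINCT ON (added_date, product_name) *
FROM temptable
WHERE NOT EXISTS (
    SELECT 1
    FROM realtable
    WHERE temptable.added_date = realtable.added_date
      AND temptable.product_name = realtable.product_name
)
ORDER BY added_date, product_name, updated_at DESC;
```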