Merge inserted values against temp table leaving out null values - sql

In my process I need to create an INSTEAD OF INSERT trigger that accepts the new values for a record and creates a new version of it. Example:
Note: the table columns are constantly changing due to business requirements, so if there is a solution that supports a table-to-table merge instead of column-to-column, that would be awesome.
ExampleTable
VersionID | ID | Value 1 | Value 2
1         | 1  | abc     | 123
Example query
INSERT INTO ExampleTable (ID,[Value 1]) VALUES (1,'testabc')
Resulting table:
VersionID | ID | Value 1 | Value 2
1         | 1  | abc     | 123
2         | 1  | testabc | 123
At this moment I have something like this:
-- Get data
SELECT TOP 1 * INTO #ExistingData FROM dbo.ExampleTableLatestVersionView
WHERE ID = @ID
-- Merge incoming data
MERGE #ExistingData AS target
USING inserted AS source
    ON (target.ID = source.ID)
WHEN MATCHED THEN
    UPDATE SET target.[Value 1] = source.[Value 1],
               target.[Value 2] = source.[Value 2];
-- And afterwards I do a new insert into version table
The problem here is that NULL values from the inserted table overwrite the existing values, and I end up with this:
VersionID | ID | Value 1 | Value 2
1         | 1  | abc     | 123
2         | 1  | testabc | NULL
I was thinking of doing INSTEAD OF UPDATE where I could get previous values by referencing VersionID, but I want to know if this is possible.

This will use the existing value if the provided value is NULL:
MERGE #ExistingData AS target
USING inserted AS source
    ON (target.ID = source.ID)
WHEN MATCHED THEN
    UPDATE SET target.[Value 1] = ISNULL(source.[Value 1], target.[Value 1]),
               target.[Value 2] = ISNULL(source.[Value 2], target.[Value 2]);
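Put together as an INSTEAD OF INSERT trigger, the whole flow could look roughly like the sketch below. This assumes a single-row insert, an auto-generated VersionID, and the view name from the question; the trigger name is illustrative. A brand-new ID or a multi-row insert would need extra handling.

CREATE TRIGGER trg_ExampleTable_InsteadOfInsert
ON dbo.ExampleTable
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Latest existing version for the incoming ID (single-row assumption)
    SELECT TOP 1 * INTO #ExistingData
    FROM dbo.ExampleTableLatestVersionView
    WHERE ID = (SELECT ID FROM inserted);

    -- Overlay only the non-NULL incoming values onto the latest version
    MERGE #ExistingData AS target
    USING inserted AS source
        ON (target.ID = source.ID)
    WHEN MATCHED THEN
        UPDATE SET target.[Value 1] = ISNULL(source.[Value 1], target.[Value 1]),
                   target.[Value 2] = ISNULL(source.[Value 2], target.[Value 2]);

    -- Insert the merged row as the new version
    INSERT INTO dbo.ExampleTable (ID, [Value 1], [Value 2])
    SELECT ID, [Value 1], [Value 2]
    FROM #ExistingData;
END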

Related

PostgreSQL add new not null column and fill with ids from insert statement

I've got 2 tables.
CREATE TABLE content (
    id bigserial NOT NULL,
    name text
);
CREATE TABLE data (
    id bigserial NOT NULL,
    ...
);
The tables are already filled with a lot of data.
Now I want to add a new column content_id (NOT NULL) to the data table.
It should be a foreign key to the content table.
Is it possible to automatically create an entry in the content table and set its id as the content_id in the data table?
For example
content
| id | name |
| 1 | abc |
| 2 | cde |
data
| id |... |
| 1 |... |
| 2 |... |
| 3 |... |
Now I need an update statement that creates 3 (in this example) content entries and adds the ids to the data table to get this result:
content
| id | name |
| 1 | abc |
| 2 | cde |
| 3 | ... |
| 4 | ... |
| 5 | ... |
data
| id |... | content_id |
| 1 |... | 3 |
| 2 |... | 4 |
| 3 |... | 5 |
According to the answers presented here: How can I add a column that doesn't allow nulls in a Postgresql database?, there are several ways of adding a new NOT NULL column and filling it directly.
Basically there are three steps. Choose the best-fitting variant (with or without a transaction, setting a default value first and removing it afterwards, leaving out the NOT NULL constraint at first and adding it afterwards, ...)
Step 1: Add the new column (without the NOT NULL constraint, because the values for the new column are not available at this point)
ALTER TABLE data ADD COLUMN content_id integer;
Step 2: Insert the data into both tables in one statement:
WITH inserted AS ( -- 1
    INSERT INTO content
    SELECT
        generate_series(
            (SELECT MAX(id) + 1 FROM content),
            (SELECT MAX(id) FROM content) + (SELECT COUNT(*) FROM data)
        ),
        'dummy text'
    RETURNING id
), matched AS ( -- 2
    SELECT
        d.id AS data_id,
        i.id AS content_id
    FROM (
        SELECT
            id,
            row_number() OVER ()
        FROM data
    ) d
    JOIN (
        SELECT
            id,
            row_number() OVER ()
        FROM inserted
    ) i ON i.row_number = d.row_number
) -- 3
UPDATE data d
SET content_id = s.content_id
FROM (
    SELECT * FROM matched
) s
WHERE d.id = s.data_id;
Executing several statements one after another, each using the results of the previous one, can be achieved with WITH clauses (CTEs):
1. Insert data into the content table: this generates an integer series starting at the current MAX(id) + 1 of the content table, with as many records as the data table has. Afterwards the new ids are returned.
2. Now we need to match the current records of the data table with the new ids. For both sides we use the row_number() window function to generate a consecutive row count for each record. Because the insert result and the actual data table have the same number of records, this can be used as the join criterion, so we can match the id column of the data table with the new content ids.
3. This matched data can be used in the final update of the new content_id column.
Step 3: Add the NOT NULL constraint
ALTER TABLE data ALTER COLUMN content_id SET NOT NULL;
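Two follow-up details the question implies but the steps above don't show (a sketch using the table and column names above): the explicit ids inserted via generate_series bypass the sequence behind content.id, so it should be resynced, and the foreign key still has to be added, assuming content.id is (or will be made) the primary key or at least unique. The constraint name is illustrative.
-- Resync the sequence behind content.id, since explicit ids were inserted
SELECT setval(pg_get_serial_sequence('content', 'id'), (SELECT MAX(id) FROM content));

-- Add the foreign key the question asks for
ALTER TABLE data
    ADD CONSTRAINT data_content_id_fkey
    FOREIGN KEY (content_id) REFERENCES content (id);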

BigQuery: Concatenate two arrays and keep distinct values within MERGE statement

I am working on a MERGE process and want to update an array field with new data, but only if the value isn't already found in the array.
target table
+-----+----------+
| id | arr_col |
+-----+----------+
| a | [1,2,3] |
| b | [0] |
+-----+----------+
source table
+-----+----------+
| id | arr_col |
+-----+----------+
| a | [3,4,5] |
| b | [0,0] |
+-----+----------+
target table post-merge
+-----+-------------+
| id | arr_col |
+-----+-------------+
| a | [1,2,3,4,5] |
| b | [0] |
+-----+-------------+
I was trying to use the SQL from this answer in my MERGE statement:
merge into target
using source
on target.id = source.id
when matched then
update set target.arr_col = array(
  select distinct x
  from unnest(array_concat(target.arr_col, source.arr_col)) x
)
but BigQuery shows me the following error:
Correlated Subquery is unsupported in UPDATE clause.
Is there any other way to update this array field via MERGE? The target and source tables can be quite large and the process would run daily, so I would like incremental updates as opposed to recreating the entire table with new data every time.
Below is for BigQuery Standard SQL
merge into target
using (
  select id,
    array(
      select distinct x
      from unnest(source.arr_col || target.arr_col) as x
      order by x
    ) as arr_col
  from source
  join target
  using(id)
) source
on target.id = source.id
when matched then
  update set target.arr_col = source.arr_col;
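To sanity-check the concat-plus-distinct expression outside of the MERGE, the same pattern can be run standalone with literal arrays (the values here are purely illustrative):
select array(
  select distinct x
  from unnest([1, 2, 3] || [3, 4, 5]) as x
  order by x
) as arr_col
-- returns [1, 2, 3, 4, 5]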
Wanted to expand on Mikhail Berlyant's answer because my actual application differed a little bit from the OP's, as I also needed data to be inserted if the merge conditions were not met.
merge into target
using (
  select id,
    array(
      select distinct x
      from unnest(
        /*
          concat didn't work without a case-when statement for
          new data (i.e. target.id is null)
        */
        case when target.id is not null then source.arr_col || target.arr_col
             else source.arr_col
        end
      ) as x
      order by x
    ) as arr_col
  from source
  left join target /* to be able to account for brand new data in source */
  using(id)
) source
on target.id = source.id
when matched then
  update set target.arr_col = source.arr_col
when not matched then
  insert row;

update a table from another table and add new values

How would I go about updating a table by using another table so that it puts in the new data, and if it doesn't match on an id, adds the new id and the data with it? My original table is much bigger than the new table that will update it, and the new table has a few ids that aren't in the old table but need to be added.
for example I have:
Table being updated-
+-------------------+
| Original Table |
+-------------------+
| ID | Initials |
|------+------------|
| 1 | ABC |
| 2 | DEF |
| 3 | GHI |
and...
the table I'm pulling data from to update the other table-
+-------------------+
| New Table |
+-------------------+
| ID | Initials |
|------+------------|
| 1 | XZY |
| 2 | QRS |
| 3 | GHI |
| 4 | ABC |
Then I want the matching values in my Original table to be updated by the new table if they have changed, and any new ID rows added if they aren't in the original table, so in this example it would end up looking like the New Table.
+-------------------+
| Original Table |
+-------------------+
| ID | Initials |
|------+------------|
| 1 | XZY |
| 2 | QRS |
| 3 | GHI |
| 4 | ABC |
You can use the MERGE statement to put this UPSERT operation in one statement, but since there are issues with MERGE I would split it into two statements, UPDATE and INSERT.
UPDATE
UPDATE O
SET O.Initials = N.Initials
FROM Original_Table O
INNER JOIN New_Table N ON O.ID = N.ID
INSERT
INSERT INTO Original_Table (ID, Initials)
SELECT N.ID, N.Initials
FROM New_Table N
WHERE NOT EXISTS (SELECT 1
                  FROM Original_Table O
                  WHERE O.ID = N.ID)
Important Note
For the reason why I suggest avoiding the MERGE statement, read the article Use Caution with SQL Server's MERGE Statement by Aaron Bertrand.
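If the two statements need to behave as a single unit (the race conditions discussed in that article can also affect the two-statement version), they can be wrapped in a transaction. A minimal sketch, leaving locking hints and isolation level to your concurrency requirements:
BEGIN TRANSACTION;

UPDATE O
SET O.Initials = N.Initials
FROM Original_Table O
INNER JOIN New_Table N ON O.ID = N.ID;

INSERT INTO Original_Table (ID, Initials)
SELECT N.ID, N.Initials
FROM New_Table N
WHERE NOT EXISTS (SELECT 1 FROM Original_Table O WHERE O.ID = N.ID);

COMMIT TRANSACTION;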
You need to use the MERGE statement for this:
MERGE original_table AS Target
USING updated_table AS Source
ON Target.id = Source.id
WHEN MATCHED THEN UPDATE SET Target.Initials = Source.Initials
WHEN NOT MATCHED THEN INSERT (id, Initials) VALUES (Source.id, Source.Initials);
You have not specified what happens in case the values in the original table are not found in the updated one. But, just in case, you can add this to remove them from the original table:
WHEN NOT MATCHED BY SOURCE
THEN DELETE
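Assembled, and with the serializable hint that is commonly recommended for upsert-style MERGE in SQL Server, the statement might look like the sketch below (include the final DELETE branch only if unmatched original rows really should be removed):
MERGE original_table WITH (HOLDLOCK) AS Target
USING updated_table AS Source
    ON Target.id = Source.id
WHEN MATCHED THEN
    UPDATE SET Target.Initials = Source.Initials
WHEN NOT MATCHED THEN
    INSERT (id, Initials) VALUES (Source.id, Source.Initials)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;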
If you can use PHP, you could loop through the rows of one table and copy them one by one to the other table.
Another option:
DECLARE @COUT INT = (SELECT COUNT(*) FROM New_Table)
DECLARE @id INT
DECLARE @ini VARCHAR(20)
WHILE (@COUT > 0)
BEGIN
    -- pick one row of New_Table per iteration
    SELECT @id = ID, @ini = Initials
    FROM (SELECT ID, Initials,
                 ROW_NUMBER() OVER (ORDER BY ID) AS rn
          FROM New_Table) t
    WHERE t.rn = @COUT
    SET @COUT = @COUT - 1
    IF (SELECT COUNT(*) FROM Original_Table WHERE ID = @id) > 0
        UPDATE Original_Table SET Initials = @ini WHERE ID = @id
    ELSE
        INSERT INTO Original_Table (ID, Initials) VALUES (@id, @ini)
END
GO

Remove semi-duplicate rows from a result set

I have a logging table (TABLE_B) that is updated via triggers from a main table (TABLE_A). The trigger operates whenever any field on TABLE_A is inserted/updated. We need to pull out a report that shows only a subset of the updates on TABLE_B, i.e. the user is only interested in the fields:
ID
STAGE
STATUS
UPDATE_DATE
I need to remove sequential duplicates from the result set. For example, suppose the following entries exist in TABLE_B:
+----+-----+------+-----------+
|ID |STAGE|STATUS|UPDATE_DATE|
+----+-----+------+-----------+
|4567|7 |9 |2012-12-25 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-24 |
+----+-----+------+-----------+
|4567|4 |3 |2012-12-23 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-22 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-21 |
+----+-----+------+-----------+
|4567|4 |3 |2012-12-20 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-19 |
+----+-----+------+-----------+
From the bottom, I need to extract rows 1, 2, 3, 5, 6, 7, omitting row 4 only: I have two entries at rows 3 & 4 that are duplicates (row 4 has been triggered into TABLE_B because of an update to some other field in TABLE_A, but its stage/status combination hasn't altered, therefore it can be ignored).
So, when I discover that the next row in a result set is a duplicate (and only the next row) of the current row, how can I either remove it from the result set, or neglect to select it in the first place. I'll be performing the operation using a stored proc - will a cursor be involved in this?
Sybase 12.5, though the syntax is very close to SQL Server.
Had a look at a similar question on Stack Overflow:
http://stackoverflow.com/questions/19774273/remove-duplicates-in-sql-result-set-of-one-table
I think this answers the question:
select id, status, stage, min(update_date)
from TABLE_B
where id = <someValue>
group by id, status, stage
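Note that grouping like this collapses every repeated stage/status combination, not only consecutive ones. If only back-to-back duplicates should be dropped, one approach that avoids window functions (so it should also run on Sybase 12.5; column names as in the question, and assuming UPDATE_DATE is unique per ID) is to skip a row whenever the immediately preceding row for the same ID has the same stage and status:
SELECT b.ID, b.STAGE, b.STATUS, b.UPDATE_DATE
FROM TABLE_B b
WHERE NOT EXISTS (
    SELECT 1
    FROM TABLE_B prev
    WHERE prev.ID = b.ID
      AND prev.STAGE = b.STAGE
      AND prev.STATUS = b.STATUS
      AND prev.UPDATE_DATE = (SELECT MAX(p2.UPDATE_DATE)
                              FROM TABLE_B p2
                              WHERE p2.ID = b.ID
                                AND p2.UPDATE_DATE < b.UPDATE_DATE)
)
ORDER BY b.UPDATE_DATE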

INSERT if NOT EXISTS, But DELETE if it EXISTS

I have the following query to update a table record, setting a new foreignKey if that foreignKey and foreignKey2 combination does not already exist. This should work great; however, how can I modify it to delete that particular pkID record if it DOES exist?
table structure:
+----------------+
| table |
+----------------+
| pkID |
| foreignKey |
| foreignKey2 |
+----------------+
query:
UPDATE table a
SET a.foreignKey = 2
WHERE a.pkID = 1234
AND NOT EXISTS (
SELECT 1
FROM table b
WHERE b.foreignKey = 2
AND b.foreignKey2 = a.foreignKey2
)
You can delete if it exists, and only insert (instead of update since the record doesn't exist to be deleted) otherwise. But it is not clear what the 3rd value should be.
DELETE tbl WHERE pkID = 1234;
IF @@ROWCOUNT = 0
    INSERT tbl (foreignKey, pkID, foreignKey2)
    VALUES (2, 1234, ??)
You need MERGE. Take a look here (there is an example with the same task).
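For completeness, a MERGE version of the same toggle might look like the sketch below. The unknown third value from the question is left as a hypothetical @foreignKey2 variable, and tbl follows the table name used in the answer above.
-- @foreignKey2 stands in for the unknown value ("??" above)
DECLARE @foreignKey2 INT;

MERGE tbl AS t
USING (SELECT 1234 AS pkID, 2 AS foreignKey, @foreignKey2 AS foreignKey2) AS s
    ON t.pkID = s.pkID
WHEN MATCHED THEN
    DELETE
WHEN NOT MATCHED THEN
    INSERT (pkID, foreignKey, foreignKey2)
    VALUES (s.pkID, s.foreignKey, s.foreignKey2);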