I have something like this in my Temp_Table:

Name  Number     Source     Type
Jon   NOT FOUND  NOT FOUND  email
I'm trying to insert a record from my Temp_Table into my Process_Table (with Operation = 'ADD') if the Source does not already exist in the All_User table or in Process_Table with an 'ADD' operation.
Currently that example record does not exist in either All_User or Process_Table, so it should be inserted into Process_Table, but for some reason it isn't. Any help or suggestion would be really appreciated.
INSERT INTO Process_Table (Name, Number, Source, Type, Operation)
SELECT Name, Number, Source, Type, 'ADD' AS Operation
FROM Temp_Table tt
WHERE
NOT EXISTS (SELECT 1 FROM All_User Au WHERE tt.Source = Au.Source)
AND
NOT EXISTS (SELECT 1 FROM Process_Table Pt WHERE tt.Source = Pt.Source AND Pt.Operation = 'ADD')
AND
tt.Source = 'NOT FOUND'
GO
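If the statement parses but still inserts nothing, one way to narrow it down is to run each filter on its own and see which one removes the row; a debugging sketch using only the tables and predicates from the query above:

-- Does the temp row survive the literal filter?
SELECT * FROM Temp_Table tt WHERE tt.Source = 'NOT FOUND';

-- Is there already a matching Source in All_User?
SELECT * FROM All_User Au WHERE Au.Source = 'NOT FOUND';

-- Is there already an 'ADD' row for that Source in Process_Table?
SELECT * FROM Process_Table Pt WHERE Pt.Source = 'NOT FOUND' AND Pt.Operation = 'ADD';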
Related
I am trying to write something like the query below.
The objective is not to have more than one record with the same description; we have a unique constraint on the description column.
I must write this insert query so that it works (without throwing errors) even if it is accidentally executed more than once. Column id is the primary key of the table.
insert into test (id, description)
select max(id)+1, 'test record' from test
where not exists ( select 1 from test where description = 'test record' );
If there is already a record in the test table with description = 'test record', then the following query returns a row where the id is null, and the insert fails with a primary key violation:
select max(id)+1, 'test record' from test
where not exists ( select 1 from test where description = 'test record' );
If I have to write a SQL block with a variable and BEGIN/END instead to accomplish this, I am happy to do that;
however, any advice is appreciated.
Nest the select statement inside another query, like this:
insert into test (id, description)
select t.id, t.description
from (
select max(id)+1 as id, 'test record' as description
from test
where not exists (select 1 from test where description = 'test record' )
) t
where t.id is not null
See the demo.
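As a quick sanity check, running the statement twice should insert the 'test record' row exactly once. The walk-through below uses hypothetical data and assumes the test table already holds at least one row, since max(id)+1 is NULL on an empty table:

insert into test (id, description) values (1, 'first record');

-- First execution: no 'test record' row exists yet, so the derived table
-- yields id = 2 and the row is inserted.
insert into test (id, description)
select t.id, t.description
from (
  select max(id)+1 as id, 'test record' as description
  from test
  where not exists (select 1 from test where description = 'test record')
) t
where t.id is not null;

-- Second execution: NOT EXISTS is now false, so max(id)+1 evaluates to NULL,
-- the outer "t.id is not null" filter removes that row, and nothing is inserted.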
The use of an aggregate function without a GROUP BY clause forces the query to produce a row even when the WHERE clause eliminates all rows.
A quick workaround is to add a (dummy) GROUP BY clause:
insert into test (id, description)
select max(id)+1, 'test record' from test
where not exists ( select 1 from test where description = 'test record' )
group by 2;
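To see the difference, compare what the two SELECTs return once a 'test record' row already exists (expected results noted in the comments; this is a sketch, not tied to any particular demo):

-- Without GROUP BY: the aggregate still produces one row, (NULL, 'test record'),
-- and inserting a NULL id then violates the primary key.
select max(id)+1, 'test record' from test
where not exists ( select 1 from test where description = 'test record' );

-- With the dummy GROUP BY: the WHERE clause leaves no rows to group, so the
-- query returns zero rows and the INSERT simply does nothing.
select max(id)+1, 'test record' from test
where not exists ( select 1 from test where description = 'test record' )
group by 2;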
Alternatively, you can move the aggregate function into a subquery; I find that this solution makes the intent clearer:
insert into test (id, description)
select t.id, 'test record'
from (select max(id) + 1 id from test) t
where not exists ( select 1 from test where description = 'test record');
Demo on DB Fiddle
I have a table with columns id, name, position, capture_date, modified_date, comments.
I am trying to do a simple upsert, which is driving me crazy.
When the table is empty, it has to insert; when it is not empty, it has to update the comments column of the row that has the same position. If the position is different, it has to insert a new row instead of updating the existing one.
When the table is empty, I used the merge statement below to create the first row.
This works fine.
But the second row has to be
1, john, 2, 01-JUL-15, 23-JUL-15, 'world'
In this case, the data is almost the same except that the position value is 2, so a new row has to be inserted instead of updating the existing row's position to 2.
Updating the existing row is, however, exactly what my merge statement is doing. Any ideas on how to handle this, please?
merge into customers a
using(select 1 as customer_id, 'john' as customer_name, '1' as position, '01-JUL-15' as capture_date,
sysdate as modified_date, 'hello' as comments from dual) b
on(a.customer_id=b.customer_id)
when matched then
update set a.customer_id = b.customer_id, a.customer_name = b.customer_name,
a.position = b.position, a.capture_date= b.capture_date, a.modified_date = b.modified_date,
a.comments=b.comments
when not matched then
insert(a.customer_id, a.customer_name, a.position, a.capture_date, a.modified_date, a.comments)
values(b.customer_id, b.customer_name, b.position, b.capture_date, b.modified_date, b.comments)
I have created the sqlfiddle
So, lessons learned:
1. Post the original query, not some faulty surrogate.
2. Post any error message you get.
The error message you get is:
ORA-38104: Columns referenced in the ON Clause cannot be updated: "A"."CUSTOMER_ID"
Solution: remove a.customer_id from the update clause.
merge into customers a
using (select 1 as customer_id
,'john' as customer_name
,'1' as position
,'01-JUL-15' as capture_date
,sysdate as modified_date
,'hello' as comments
from dual) b
on (a.customer_id = b.customer_id)
when matched then
update
set a.customer_name = b.customer_name
,a.position = b.position
,a.capture_date = b.capture_date
,a.modified_date = b.modified_date
,a.comments = b.comments
when not matched then
insert
(a.customer_id
,a.customer_name
,a.position
,a.capture_date
,a.modified_date
,a.comments)
values
(b.customer_id
,b.customer_name
,b.position
,b.capture_date
,b.modified_date
,b.comments)
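If the behaviour asked for in the question is also needed (update only when the same customer_id and position already exist, otherwise insert a new row), the same rule applies: put position in the ON clause and leave it out of the UPDATE list. A rough, untested sketch along those lines, using the second row from the question:

merge into customers a
using (select 1           as customer_id
             ,'john'      as customer_name
             ,'2'         as position
             ,'01-JUL-15' as capture_date
             ,sysdate     as modified_date
             ,'world'     as comments
       from dual) b
on (a.customer_id = b.customer_id and a.position = b.position)
when matched then
  update
     set a.customer_name = b.customer_name
        ,a.capture_date  = b.capture_date
        ,a.modified_date = b.modified_date
        ,a.comments      = b.comments
when not matched then
  -- position differs, so a brand new row is inserted instead of an update
  insert (a.customer_id, a.customer_name, a.position,
          a.capture_date, a.modified_date, a.comments)
  values (b.customer_id, b.customer_name, b.position,
          b.capture_date, b.modified_date, b.comments)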
UPDATE: Using COLUMNS_UPDATED() is not an answer to this question, as the fields may change order, which would break the trigger (COLUMNS_UPDATED() depends on column order).
UPDATE 2: I already know that the deleted and inserted tables hold the data. The question is how to determine what has changed without having to hard-code the field names, as the field names may change or fields may be added.
Let's say I have a table with three fields.
The row already exists, and now the user updates fields 1 and 2.
How do I determine, in the update trigger, which fields were updated and what the before and after values were?
I want to then log these to a log table. If two fields were updated, that should result in two rows in the history table.
Table

Id  intField1  charField2  dateField3
7   3          Fred        1995-03-05

Updated to

7   3          Freddy      1995-05-06

History Table

Id  IdOfRowThatWasUpdated  BeforeValue  AfterValue (as string)
1   7                      Fred         Freddy
2   7                      1995-03-05   1995-05-06
I know I can use the deleted table to get the old values and the inserted table to get the new values. The question, however, is how to do this dynamically. The actual table has 50 columns, and I don't want to hard-code 50 fields into a SQL statement, nor do I want to worry about keeping the SQL in sync with table changes.
Greg
You can use one of my favorite XML tricks to do this:
create trigger utr_Table1_update on Table1
after update, insert, delete
as
begin
    with cte_inserted as (
        -- serialize each inserted row into an XML fragment
        select id, (select t.* for xml raw('row'), type) as data
        from inserted as t
    ), cte_deleted as (
        -- serialize each deleted row into an XML fragment
        select id, (select t.* for xml raw('row'), type) as data
        from deleted as t
    ), cte_i as (
        -- shred the inserted XML into (ID, column name, value) pairs
        select
            c.ID,
            t.c.value('local-name(.)', 'nvarchar(128)') as Name,
            t.c.value('.', 'nvarchar(max)') as Value
        from cte_inserted as c
            outer apply c.data.nodes('row/@*') as t(c)
    ), cte_d as (
        -- shred the deleted XML into (ID, column name, value) pairs
        select
            c.ID,
            t.c.value('local-name(.)', 'nvarchar(128)') as Name,
            t.c.value('.', 'nvarchar(max)') as Value
        from cte_deleted as c
            outer apply c.data.nodes('row/@*') as t(c)
    )
    insert into Table1_History (ID, Name, OldValue, NewValue)
    select
        isnull(i.ID, d.ID) as ID,
        isnull(i.Name, d.Name) as Name,
        d.Value,
        i.Value
    from cte_i as i
        full outer join cte_d as d on d.ID = i.ID and d.Name = i.Name
    where
        -- keep only values that actually differ (intersect is null-safe)
        not exists (select i.Value intersect select d.Value)
end;
sql fiddle demo
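The history table the trigger writes to is not shown above; a minimal, hypothetical setup for trying it out could look like this (table and column names chosen only to match the trigger):

-- Hypothetical tables matching the names used in the trigger above.
create table Table1 (
    id   int primary key,
    col1 nvarchar(100),
    col2 nvarchar(100)
);

create table Table1_History (
    ID       int,              -- id of the row that changed
    Name     nvarchar(128),    -- name of the column that changed
    OldValue nvarchar(max),
    NewValue nvarchar(max)
);

-- Quick test: the insert logs every column with OldValue = null (the trigger
-- also fires on insert), and the update logs a single row for col1.
insert into Table1 (id, col1, col2) values (7, N'Fred', N'1995-03-05');
update Table1 set col1 = N'Freddy' where id = 7;

select * from Table1_History;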
In this post:
How to refer to "New", "Old" row for Triggers in SQL server?
It explains how you can access the original and the new values, and once you can access them, you can compare them:
"INSERTED is the new row on INSERT/UPDATE. DELETED is the deleted row on DELETE and the updated row on UPDATE (i.e. the old values before the row was updated)"
Using SQL Server (2012).
I have a table, TABLE_A, with the columns
(id, name, category, type, reference)
id is the primary key, and is controlled by a separate table (table_ID) that holds the next available primary id. Usually insertions are made from the application side (Java), which takes care of bumping this id to the next value after every insert (through EJBs, manually, etc.).
However, I would like to write a stored procedure (called from the Java application) that
- finds records in this table where (for example) reference = 'AAA' (passed as a parameter)
- once multiple records are found (all with the same reference 'AAA'), inserts new records with new IDs and reference = 'BBB', the other columns (name, category, type) being the same as in the found list.
I am thinking of a query similar to this:
INSERT INTO table_A
       (ID
       ,NAME
       ,CATEGORY
       ,TYPE
       ,Reference)
VALUES
       (
        /* current_nextID */,
        (select NAME
         from TABLE_A
         where REFERENCE in (/* query returning value 'AAA' */)),
        (select CATEGORY
         from TABLE_A
         where REFERENCE in (/* query returning value 'AAA' */)),
        (select TYPE
         from TABLE_A
         where REFERENCE in (/* query returning value 'AAA' */)),
        'BBB - NEW REFERENCE VALUE TO BE USED'
       )
Since I don't know how many records I will be inserting, that is, how many items will be in the result set of a criteria query like
select /* field */
from TABLE_A
where REFERENCE in (/* query returning value 'AAA' */)
I don't know how to come up with the value of ID for every record. Can anyone suggest anything, please?
It's not clear from your question how sequencing is handled, but you can do something like this:
CREATE PROCEDURE copybyref(@ref VARCHAR(32)) AS
BEGIN
    -- BEGIN TRANSACTION
    -- copy every row with the given reference, assigning each copy a new id
    -- offset from the current sequence value
    INSERT INTO tablea (id, name, category, type, reference)
    SELECT value + rnum, name, category, type, 'BBB'
    FROM
    (
        SELECT t.*, ROW_NUMBER() OVER (ORDER BY id) rnum
        FROM tablea t
        WHERE reference = @ref
    ) a CROSS JOIN
    (
        SELECT value
        FROM sequence
        WHERE table_id = 'tablea'
    ) s

    -- advance the sequence past the ids just consumed
    UPDATE sequence
    SET value = value + @@ROWCOUNT + 1
    WHERE table_id = 'tablea'
    -- COMMIT TRANSACTION
END
Sample usage:
EXEC copybyref 'AAA';
Here is SQLFiddle demo
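The procedure also reads and updates a sequence bookkeeping table that is not shown in the question; the layout it assumes is roughly this (hypothetical DDL, matching the question's description that table_ID holds the next available id):

-- Hypothetical id bookkeeping table used by copybyref above.
CREATE TABLE sequence (
    table_id VARCHAR(32) PRIMARY KEY,
    value    INT NOT NULL      -- next available id for that table
);

INSERT INTO sequence (table_id, value) VALUES ('tablea', 1);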
I want to create a trigger to detect whether a row has been changed in SQL Server. My current approach is to loop through each field, apply COLUMNS_UPDATED() to detect whether UPDATE has been called, and then compare the values of that field for the same row (identified by PK) in inserted vs deleted.
I want to eliminate the looping from the procedure. Probably I can dump the contents of inserted and deleted into one table, group on all columns, and pick up the rows with count = 2; those rows would count as unchanged (sketched below).
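Purely to illustrate that grouping idea (hypothetical column names Col1, Col2; it still requires listing every column, which is what the answer below avoids for the comparison):

-- Rows whose before and after images are identical appear twice and are
-- therefore unchanged; everything else was really modified.
select Id
from (select Id, Col1, Col2 from inserted
      union all
      select Id, Col1, Col2 from deleted) as combined
group by Id, Col1, Col2
having count(*) = 2;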
The end goal is to create an audit trail:
1) Track user and timestamp
2) Track insert, delete and REAL changes
Any suggestion is appreciated.
Instead of looping you can use BINARY_CHECKSUM to compare entire rows between the inserted and deleted tables, and then act accordingly.
Example
Create table SomeTable(id int, value varchar(100))
Create table SomeAudit(id int, Oldvalue varchar(100), NewValue varchar(100))
Create trigger tr_SomTrigger on SomeTable for Update
as
begin
insert into SomeAudit
(Id, OldValue, NewValue)
select i.Id, d.Value, i.Value
from
(
Select Id, Value, Binary_CheckSum(*) Version from Inserted
) i
inner join
(
Select Id, Value, Binary_CheckSum(*) Version from Deleted
) d
on i.Id = d.Id and i.Version <> d.Version
End
Insert into sometable values (1, 'this')
Update SomeTable set Value = 'That'
Select * from SomeAudit