I have an SQL job which takes two values out of a table and merges them together.
After the merge, these two values are deleted.
If the table is empty, the job produces error messages.
How can I prevent this?
You would add a condition to check for an empty table:
if exists (select 1 from t)
begin
    -- the merge code or procedure call here
end;
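For example, wrapped around the job step's logic (dbo.SourceValues and dbo.MergeSourceValues are placeholder names for illustration):
if exists (select 1 from dbo.SourceValues)
begin
    -- placeholder for the job's actual merge logic
    exec dbo.MergeSourceValues;

    -- the two merged values are then deleted, as the job does today
    delete from dbo.SourceValues;
end;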
I have read-only access to a DB2 database and I want to create an "in flight/on the fly" or temporary table which only exists within the SQL, then populate it with values, then compare the results against an existing table.
So far I am trying to validate the premise, and I have the following query compiling but failing to pick anything up with the select statement.
Can anyone tell me what I am doing wrong, or advise whether what I am attempting is possible? (Or perhaps suggest a better way of doing things.)
--Create a table that only exists within the query
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPEVENT (EVENT_TYPE INTEGER);
--Insert a value into the temporary table
INSERT INTO SESSION.TEMPEVENT (EVENT_TYPE) VALUES (1);
--Select all values from the temporary table
SELECT * FROM SESSION.TEMPEVENT;
--Drop the table so the query can be run again
DROP TABLE SESSION.TEMPEVENT;
If you look at the syntax diagram of the DECLARE GLOBAL TEMPORARY TABLE statement, you may note the following block:
     .-ON COMMIT DELETE ROWS---.
--●--+-------------------------+--●----------------------------
     '-ON COMMIT PRESERVE ROWS-'
This means that ON COMMIT DELETE ROWS is the default behavior. If you issue your statements with autocommit turned on, a COMMIT is issued implicitly after each statement, which deletes all the rows in your DGTT.
If you want DB2 not to delete the rows in the DGTT upon commit, you have to explicitly specify the ON COMMIT PRESERVE ROWS clause in the DGTT declaration.
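For example, to keep the rows across commits, the declaration from the question becomes:
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPEVENT
    (EVENT_TYPE INTEGER)
    ON COMMIT PRESERVE ROWS;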
I have the following table:
Data      -- table name
  ID      -- identity column
  PCode   -- postal code
I created the following trigger:
CREATE TRIGGER Trig
ON Data
FOR INSERT
AS
BEGIN
    SELECT * FROM inserted
END
Then I inserted the following values:
INSERT INTO Data VALUES (125)
INSERT INTO Data VALUES (126)
INSERT INTO Data VALUES (127)
It shows each insert's single row in a separate result set, but I was expecting something cumulative, like this:
After the 1st insertion, the trigger is executed -> one row is shown in the inserted table.
After the 2nd insertion, the trigger is executed -> two rows are shown in the inserted table.
After the 3rd insertion, the trigger is executed -> three rows are shown in the inserted table.
According to the documentation on msdn.microsoft.com, all the inserted rows are in this table.
How can I access the inserted table so that I can see all the expected rows at once, rather than separately?
You cannot. From the Use the inserted and deleted Tables article on microsoft.com, you can read:
The inserted table stores copies of the affected rows during INSERT and UPDATE statements.
That means that the inserted table will only contain rows for the current INSERT or UPDATE statement.
If you do want to see all rows for several such INSERT or UPDATE statements, you will have to store these rows in a table you created yourself.
There are two tables available in a trigger: inserted and deleted. Each update on table XXX is actually a delete of row X from XXX followed by an insert of row X into XXX, so the inserted table inside the trigger is a copy of what got inserted. You can do a lot with a trigger, but triggers are dangerous.
For example, on a performance gig I found a huge SP being run by a trigger; we dropped it and the database came back online. As another example, if you write a login-auditing trigger incorrectly, you can bring the server down.
As TT mentioned, if you want to keep all the inserted records then you need to change your trigger to something like this:
CREATE TRIGGER Trig
ON Data
FOR INSERT
AS
BEGIN
    -- Append the new rows to a permanent audit table instead of
    -- selecting them; SELECT ... INTO would recreate the target
    -- table and fail on every insert after the first
    INSERT INTO dbo.DataAudit
    SELECT * FROM inserted
END
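The audit table has to exist before the trigger first fires. It can be created once with the same shape as Data, for example (dbo.DataAudit is an illustrative name):
-- Create an empty copy of Data to receive the audited rows
SELECT * INTO dbo.DataAudit FROM dbo.Data WHERE 1 = 0;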
I'm currently working with DB2 v10.5 and I need to log all SQL statements that occur on a specific table (A). For instance, if an INSERT occurs on table A, I need to "grab" that SQL statement and log it into another table (A_LOGGER).
The solution I've come up with is to create a TRIGGER (for each CRUD operation) over table A that looks into the table SYSIBMADM.SNAPDYN_SQL and tries to save the last statement executed on table A.
Example for the INSERT Statement:
CREATE OR REPLACE TRIGGER OPERATIONS_INSERT_TRIGGER
AFTER INSERT ON REPLDEMO.OPERATIONS
REFERENCING NEW AS OBJ
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    INSERT INTO REPLDEMO.OPERATIONS_LOGGER (LAST_SQL_STATEMENT)
        SELECT STMT_TEXT FROM SYSIBMADM.SNAPDYN_SQL
        WHERE STMT_TEXT LIKE 'INSERT INTO REPLDEMO.OPERATIONS (%'
          AND STMT_TEXT NOT LIKE '%?%';
END%
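(The trailing % presumably acts as the statement terminator, e.g. set via db2 -td% in the command line processor, so that the semicolons inside the trigger body do not end the statement prematurely.)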
But looking at table SYSIBMADM.SNAPDYN_SQL is not the best solution, because you cannot guarantee that you will really get the last SQL statement executed on table A. Moreover, if a massive number of statements is executed over table A in a very short period, the trigger will replicate many of the statements already saved in A_LOGGER.
So, my question is: is there an effective and secure way to get the last SQL statement executed over a table?
I've created a stored procedure that refreshes the data in a table. It first re-loads the entire table. Next, several filters are applied. (Example: the column 'Model' must equal 'W'; all rows with model 'B' are deleted.) This happens after the table has been loaded (and not during) because I want to log how many rows each individual filter deletes. After the filters have been applied, some columns contain the same value in every row (the other values were deleted in the filtering process). These columns are now useless, so I want to delete them.
This seems to be problematic for SQL Server. When given the command to execute the SP, it reports that the columns it is supposed to remove in its final step do not exist, and refuses to run. That is technically correct: the columns don't currently exist, but they will be created by the SP itself.
Some mockup code:
CREATE PROCEDURE dbo.Procedure AS
BEGIN
    DROP TABLE dbo.Table
    SELECT * INTO dbo.Table FROM dbo.View
    INSERT INTO dbo.Log VALUES (GETDATE(), (SELECT COUNT(1) FROM dbo.Table))
    DELETE FROM dbo.Table WHERE Model <> 'W'
    INSERT INTO dbo.Log VALUES (GETDATE(), (SELECT COUNT(1) FROM dbo.Table))
    ALTER TABLE dbo.Table DROP COLUMN Model
END
Error code when executing:
[2016-09-02 12:25:20] [S0001][207] Invalid column name 'Model'.
How do I circumvent this problem and get the SP to run?
If I understand correctly, you can use dynamic SQL:
exec sp_executesql N'ALTER TABLE dbo.Table DROP COLUMN Model';
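Because the dynamic string is only parsed at run time, the compile-time column check no longer applies. A guarded variant is sketched below, using the mockup's names (bracketed here, since Table is a reserved word); COL_LENGTH returns NULL when the column does not exist:
-- Drop the column only if it is currently present
IF COL_LENGTH('dbo.[Table]', 'Model') IS NOT NULL
    EXEC sp_executesql N'ALTER TABLE dbo.[Table] DROP COLUMN Model';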
The syntax to remove a column from a table in SQL Server is:
ALTER TABLE TableName DROP COLUMN ColumnName;
This may be the cause of the issue.
Can you check one more time that the column 'Model' actually exists in the view?
I have tried the same scenario and it works for me.
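For instance, a quick metadata check with the mockup's names (bracketed since View is a reserved word; an empty result means the view does not expose the column):
-- Look for the 'Model' column on the source view
SELECT name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.[View]')
  AND name = 'Model';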
I'm trying to figure out how to determine whether a table has been affected by a number of processes that run in sequence, and I need to know the state of the table before and after each one runs. What I've been trying to do is run some SQL before all the processes run that saves a "before" checksum of every table in the db to a table, then run it again as each process ends and update the table row with an "after" checksum. After all the processes are over, I compare the checksums and get all rows where before <> after.
The only problem is that I'm not the best guy for SQL and am a little lost. Here's where I'm at right now:
select checksum_agg(binary_checksum(*)) from empcomp with (nolock)

create table Test_CheckSum_Record (TableName varchar(max), CheckSum_Before int, CheckSum_After int)

select name into #TempNames
from sys.tables where is_ms_shipped = 0
And the pseudocode for what I want to do is something like
foreach (var name in #TempNames)
    insert into Test_CheckSum_Record(name, ExecuteSQL N'select checksum_agg(binary_checksum(*)) from ' + name + ' with (nolock)', null)
But how does one do this?
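One way to express that pseudocode in T-SQL, using the names from the snippets above, is sketched below (a cursor driving dynamic SQL):
DECLARE @name sysname, @sql nvarchar(max);

DECLARE name_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM #TempNames;

OPEN name_cursor;
FETCH NEXT FROM name_cursor INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run one checksum query per table, storing the result
    -- as the "before" value; the "after" column is filled in later
    SET @sql = N'INSERT INTO Test_CheckSum_Record (TableName, CheckSum_Before, CheckSum_After) '
             + N'SELECT ' + QUOTENAME(@name, '''') + N', '
             + N'checksum_agg(binary_checksum(*)), NULL '
             + N'FROM ' + QUOTENAME(@name) + N' WITH (NOLOCK);';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM name_cursor INTO @name;
END;

CLOSE name_cursor;
DEALLOCATE name_cursor;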
Judging by the comments, you need to create a trigger that handles all CRUD operations and simply sets a flag.
The syntax is:
CREATE TRIGGER [TriggerName] ON [TableName]
AFTER INSERT, UPDATE, DELETE
In the trigger you can do a
select checksum_agg([columns you want to compare against]) from [ParentTable]
and store that value in a variable, then check it against the checksum table's before column. If no entry exists, you add a new one with the DELETED table's checksum_agg value as the before entry.
Please note that the choice not to use the inserted table is just my preference for calculated columns.
I will edit later when I have more time to add code
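In the meantime, a minimal sketch of that idea, using the empcomp table and the Test_CheckSum_Record table from the question, might look like:
CREATE TRIGGER Trig_empcomp_Checksum
ON empcomp
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @cs int;

    -- Recompute the aggregate checksum over the whole table
    SELECT @cs = checksum_agg(binary_checksum(*)) FROM empcomp;

    IF EXISTS (SELECT 1 FROM Test_CheckSum_Record WHERE TableName = 'empcomp')
        -- Later firings record the table's latest state as the "after" value
        UPDATE Test_CheckSum_Record
        SET CheckSum_After = @cs
        WHERE TableName = 'empcomp';
    ELSE
        -- The first firing stores the current state as the "before" value
        INSERT INTO Test_CheckSum_Record (TableName, CheckSum_Before, CheckSum_After)
        VALUES ('empcomp', @cs, NULL);
END;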