We have the following case:
We have a SQL Server database with a table CL_Example, and another database with a view of the same name, CL_Example. The view is built by joining more than one table with inner joins.
The structure of the CL_Example view and CL_Example table is the same.
We have a SQL script that will be executed on both of these databases. If it finds CL_Example as a table, it should insert the data; if it finds CL_Example as a view, it should not insert the data.
But when we are executing the script on the database where CL_Example is a view, we get the following error:
Update or insert of view or function 'dbo.CL_Example' failed because it contains a derived or constant field.
This error is raised even though the insert statement is unreachable on the database where CL_Example is a view. Can we suppress this error and continue to execute the script on both databases?
SQL Server compiles all the statements in a batch before executing.
This is a compile-time error and stops the batch from compiling (referencing non-existent objects can give a statement deferred compilation, where compilation is retried at statement execution time, but that does not apply here).
If the batch may need to run such problem statements, you need to move them into a different scope so they are only compiled when your flow-of-control conditions are met.
Often the easiest way of doing that is to wrap the whole problem statement in EXEC('')
IF OBJECT_ID('dbo.CL_Example', 'U') IS NOT NULL /*It is a table*/
EXEC('UPDATE dbo.CL_Example ....')
Or use sys.sp_executesql if it isn't just a static string and you need to pass parameters to it.
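For instance, a minimal sketch of the parameterised variant, assuming a hypothetical column SomeColumn and parameter @NewValue (the real column list isn't shown in the question):

IF OBJECT_ID('dbo.CL_Example', 'U') IS NOT NULL /* It is a table */
    EXEC sys.sp_executesql
        N'INSERT INTO dbo.CL_Example (SomeColumn) VALUES (@NewValue);',
        N'@NewValue INT',
        @NewValue = 42;

Because the INSERT lives inside the dynamic string, it is only compiled when the IF branch actually runs, so the batch compiles cleanly on the database where CL_Example is a view.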
May I suggest another approach to your problem? Instead of fixing the query so that it does not insert when CL_Example is a view, you can create an INSTEAD OF INSERT trigger on the view that does nothing, and then you don't have to worry about the insert queries at all.
EXAMPLE:
create view vw_test
as
select 1 as a, 2 as b;
go
select * from vw_test;
go
if (1=2)
begin
insert into vw_test (a,b) values (3,4); -- unreachable but fails
end
go
create trigger tg_fake_insert on vw_test
instead of insert
as
begin
set nocount on;
end;
go
if (1=2)
begin
insert into vw_test (a,b) values (3,4); --unreachable, does not fail
end
else
begin
insert into vw_test (a,b) values (3,4); --actually works
end
go
You might even want to put some actual logic in that trigger and divert the data somewhere else?
EDIT: One additional thought: if you are using a view to replace a table so that legacy code keeps working, you SHOULD handle the insert/update/delete situations that may occur, and a good way to do that is with INSTEAD OF triggers.
Related
I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) will execute the second one (Sp2) and save returned data into #tempTB1 and the second one will execute the third one (Sp3) and save data into #tempTB2.
If I execute Sp2 on its own it works and returns all the data from Sp3, but the problem is in Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried to change where Sp2 is executed, and it displays another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble up' data through a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns for working around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function.
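A minimal sketch of that idea, with illustrative columns since Sp3's real result set isn't shown:

CREATE FUNCTION dbo.fn_Sp3Data()
RETURNS TABLE
AS
RETURN
(
    SELECT ID = 1, Data = 'Data1'
    UNION ALL
    SELECT ID = 2, Data = 'Data2'
);
GO

-- Sp2 can now capture the rows without any INSERT-EXEC:
INSERT INTO #tempTB2 (ID, Data)
SELECT ID, Data FROM dbo.fn_Sp3Data();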
This is the only "simple" way to do this in SQL Server without some giant convoluted function or dynamic SQL string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
Note: you MUST use 'set fmtonly off', AND you CANNOT add dynamic SQL to this inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as it outperforms a temp table in most cases.
OK, encouraged by jimhark, here is an example of the old single-hash temp table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My workaround for this problem has always been to use the principle that single-hash temp tables are in scope for any called procs. So, I have an option switch in the proc parameters (default set to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check whether the single-hash table exists in scope; if it does, insert the results, otherwise return the result set. Seems to work well - the best way of passing large data sets between procs.
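A minimal sketch of the option-switch version of that pattern (the procedure, table, and column names here are purely illustrative):

CREATE PROCEDURE dbo.GetData
    @FillTempTable BIT = 0   -- off by default; callers opt in
AS
BEGIN
    IF @FillTempTable = 1 AND OBJECT_ID('tempdb..#Results') IS NOT NULL
        INSERT INTO #Results (ID, Data)
        SELECT ID, Data FROM dbo.SomeTable;
    ELSE
        SELECT ID, Data FROM dbo.SomeTable;
END
GO

-- Caller: create the single-hash table first, then ask the proc to fill it.
CREATE TABLE #Results (ID INT, Data VARCHAR(20));
EXEC dbo.GetData @FillTempTable = 1;
SELECT * FROM #Results;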
This trick works for me.
You don't have this problem on a remote server, because on a remote server the last insert command waits for the result of the previous command to execute. That's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a linked server, do it.
Create the same server as a linked server:
in SSMS, log into your server
go to "Server Object
Right Click on "Linked Servers", then "New Linked Server"
on the dialog, give your linked server any name, e.g. THISSERVER
server type is "Other data source"
Provider : Microsoft OLE DB Provider for SQL server
Data source: your IP, it can be also just a dot (.), because it's localhost
Go to the tab "Security" and choose the 3rd one "Be made using the login's current security context"
You can edit the server options (3rd tab) if you want
Press OK, your linked server is created
Now your SQL command in SP1 is:
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic insert in SP2.
A workaround I found is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks" into the solution.
I encountered this issue when trying to import the results of a stored proc into a temp table, while that stored proc inserted into a temp table as part of its own operation. The issue is that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
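A minimal sketch of that change inside the called procedure (the table and column names are illustrative):

CREATE PROCEDURE dbo.GetReport
AS
BEGIN
    -- Was: CREATE TABLE #Work (ID INT, Amount MONEY);
    DECLARE @Work TABLE (ID INT, Amount MONEY);

    INSERT INTO @Work (ID, Amount)
    SELECT ID, Amount FROM dbo.Orders;

    SELECT ID, Amount FROM @Work;
END
GO

-- The caller can now capture the result set without the temp-table conflict:
CREATE TABLE #Report (ID INT, Amount MONEY);
INSERT INTO #Report EXEC dbo.GetReport;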
I recommend reading this entire article. Below is the section of that article most relevant to your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
IF @@trancount > 0 ROLLBACK TRANSACTION
EXEC error_handler_sp
RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and the same concern about duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed the common code to live inside one sproc, with the mode directing the flow and result set of the sproc.
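A minimal sketch of that "mode" idea (the procedure, tables, and columns are illustrative):

CREATE PROCEDURE dbo.GetOrders
    @CustomerID INT,
    @Mode TINYINT = 1   -- 1 = summary result set, 2 = detail result set
AS
BEGIN
    DECLARE @Filtered TABLE (OrderID INT, Amount MONEY, OrderDate DATE);

    -- The common code lives here once
    INSERT INTO @Filtered (OrderID, Amount, OrderDate)
    SELECT OrderID, Amount, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;

    IF @Mode = 1
        SELECT COUNT(*) AS OrderCount, SUM(Amount) AS Total FROM @Filtered;
    ELSE
        SELECT OrderID, Amount, OrderDate FROM @Filtered;
END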
What about just storing the output in a static table? Like:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's so simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), so now I am using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
nested spGetData stored procedure content
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable on the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c to the select you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate.
Now call the inner sp from the outer one supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop and then close and deallocate.
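Put together, a minimal sketch of the whole pattern (the procedure, table, and column names are illustrative):

CREATE PROCEDURE dbo.InnerProc
    @cOUT CURSOR VARYING OUTPUT
AS
BEGIN
    DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT ID, Data FROM dbo.SomeTable;
    OPEN c;
    SET @cOUT = c;   -- hand the open cursor back to the caller
END
GO

CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    DECLARE @c CURSOR, @ID INT, @Data VARCHAR(20);
    EXEC dbo.InnerProc @cOUT = @c OUTPUT;

    FETCH NEXT FROM @c INTO @ID, @Data;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- do something useful with each row here
        FETCH NEXT FROM @c INTO @ID, @Data;
    END

    CLOSE @c;
    DEALLOCATE @c;
END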
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with a transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# lets you overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example by feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes with a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the rollback error. It went away when I fixed my sqlcmd table variable, populated by the insert-exec statement, to match what the stored proc returned. It was missing org_code. In a Windows cmd file, the following loads the result of the stored procedure and selects it.
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"
I am trying to write a trigger that selects values from some tables and then inserts them into another table.
So, for now I have this. There are a lot of columns, so I haven't copied them; they are only VARCHAR2 values, and this part works, so I don't think they are relevant:
create or replace
TRIGGER TRIGGER_FICHE
AFTER INSERT ON T_AG
BEGIN
declare
begin
INSERT INTO t_ag_hab@DBLINK_DEV
()
values
();
/*commit;*/
end;
END;
Stored procedure from which the trigger will be fired (again, a lot of parameters, not relevant to copy them):
INSERT INTO T_AG()
VALUES
();
commit work;
The thing is, we cannot COMMIT inside a trigger; I have read that and understand it.
But how can I see an update of my table, with the new value?
When the process runs, there is no error, but I don't see the new row in t_ag_hab.
I know it's not very clear, but I don't know how to explain it any other way.
How can I fix this?
Because you're inserting into a remote table via a database link, you have a distributed transaction:
... distributed transaction processing is more complicated because the database must coordinate the committing or rolling back of the changes in a transaction as an atomic unit. The entire transaction must commit or roll back.
When you commit, you're committing both the local insert and the remote insert performed by your trigger, as an atomic unit. You cannot commit one without the other, and you don't have to do anything extra to commit the remote change:
The two-phase commit mechanism is transparent to users who issue distributed transactions. In fact, users need not even know the transaction is distributed. A COMMIT statement denoting the end of a transaction automatically triggers the two-phase commit mechanism. No coding or complex statement syntax is required to include distributed transactions within the body of a database application.
If you can't see the inserted data from the remote database afterwards then something else has deleted it after the commit, or more likely you're looking at the wrong database.
One slight downside (though also a feature) of a database link is that it hides the details of where the work is being done. You can drop and recreate a link to make your code update a different target database without having to modify the code itself. But that means your code doesn't know where the insert is actually going - you'd need to check the data dictionary to see where the link is pointing. And even then you might not know as the link can be using a TNS alias to identify the database, and changes to the tnsnames.ora aren't visible from within the database.
If you can see the data after committing by querying t_ag_hab@dblink_dev from the same database where you ran your procedure, but you can't see it when querying locally from the database you expect the link to be pointing to, then the link isn't pointing where you think it is. The insert is going to one database, and you are performing your query against a different one. Only you can decide which is the 'correct' database, though; and either redefine the link (or TNS entry, if appropriate), or change where you're doing the query.
I am not able to understand your requirement clearly. For updating records in the main table and inserting the old records into an audit table, we can use the below query as a trigger (MS-SQL):
Create trigger trg_update ON T_AGENT
AFTER UPDATE AS
BEGIN
UPDATE Tab1
SET COL1 = I.COL1, COL2=I.COL2
FROM INSERTED I INNER JOIN Tab1 ON I.COL3=Tab1.Col3
INSERT Tab1_Audit(COL1,COL2,COL3)
SELECT COL1, COL2, COL3 FROM DELETED
RETURN;
END;
So far what you presented is just an insert trigger. If you want to handle the update action as well, try adding UPDATE handling like in this example:
SQL> CREATE OR REPLACE TRIGGER validate_update
2 AFTER INSERT OR UPDATE ON T_AGENT
3 FOR EACH ROW
4 BEGIN
5 IF UPDATING('ACCOUNT_ID') THEN -- do something like this when updating
6 DBMS_OUTPUT.put_line ('ERROR'); -- add your action here
7 ELSIF INSERTING THEN
8 INSERT INTO t_ag_hab@DBLINK_DEV() values();
9 END IF;
10 END;
11 /
Trigger created.
I have tons of insert statements.
I want to ignore errors during the execution of these lines, and I prefer not to wrap each line separately.
Example:
try
insert 1
insert 2
insert 3
exception
...
I want it so that if an exception is thrown in insert 1, it is ignored and execution moves on to insert 2, and so on.
How can I do it?
I'm looking for something like "Resume next" in VB.
If you can move all the inserts to a SQL script and then run them in SQL*Plus, then every insert will run on its own and the script will continue to run.
If you are using PL/SQL Developer (you tagged it), then open a new command window (which is exactly like a SQL script run by SQL*Plus) and put your statements like this:
insert into your_table values(1,'aa');
insert into your_table values(2/0,'bb');
insert into your_table values(3,'cc');
commit;
Even though statement (2) will throw an exception, since it's not in a block execution will continue to the next command.
UPDATE: According to @CheranShunmugavel's comment, add
WHENEVER SQLERROR CONTINUE NONE
at the top of the script (especially if you're using SQL*Plus, where the default is EXIT).
You'd need to wrap each INSERT statement with its own exception handler. If you have "tons" of insert statements where any of the statements can fail, however, I would tend to suspect that you're approaching the problem incorrectly. Where are these statements coming from? Could you pull the data directly from that source system? Could you execute the statements in a loop rather than listing each one? Could you load the data first into a set of staging tables that will ensure that all the INSERT statements succeed (i.e. no constraints, all columns defined as VARCHAR2(4000), etc.) and then write a single SQL statement that moves the data into the actual destination table with appropriate validations and exception handling?
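For instance, a minimal PL/SQL sketch of the loop idea, assuming the rows can be staged first (staging_table and target_table are hypothetical names):

BEGIN
  FOR r IN (SELECT col1, col2 FROM staging_table) LOOP
    BEGIN
      INSERT INTO target_table (col1, col2) VALUES (r.col1, r.col2);
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- swallow the error for this row and carry on with the next
    END;
  END LOOP;
END;
/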
Use the LOG ERRORS clause. More information in Avoiding Bulk INSERT Failures with DML Error Logging.
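A minimal sketch of that clause (target_table and staging_table are illustrative names; DBMS_ERRLOG creates the ERR$_ logging table):

-- Create the error logging table once
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TABLE');

INSERT INTO target_table (col1, col2)
SELECT col1, col2 FROM staging_table
LOG ERRORS INTO err$_target_table ('nightly load') REJECT LIMIT UNLIMITED;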
I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation...
I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it.
By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete.
So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone?
All the trigger looks like is this:
ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS
EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger]
GO
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
UPDATE:
Well after re-executing the same trigger code above, everything works again, so I may never know what if any error SSMS would give.
Also, there is no call to rollback anywhere in the trigger's code.
AFTER just means the trigger fires after the event; the event can still be rolled back.
example
create table test(id int)
go
create trigger trDelete on test after delete
as
print 'i fired '
rollback
do an insert
insert test values (1)
now delete the data
delete test
Here is the output from the trigger
i fired
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
now check the table, and verify that nothing was deleted
select * from test
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
Suddenly, one of the three stopped working while an automated delete tool was run on it.
Triggers fire per batch/statement, not per row. Is it possible that your trigger wasn't coded for multi-row operations and the automated tool deleted more than one row in the batch? Take a look at Best Practice: Coding SQL Server triggers for multi-row operations.
Here is an example that will make the trigger fail without doing an explicit rollback
alter trigger trDelete on test after delete
as
print 'i fired '
declare @id int
select @id = (select id from deleted)
GO
insert some rows
insert test values (1)
insert test values (2)
insert test values (3)
run this
delete test
i fired
Msg 512, Level 16, State 1, Procedure trDelete, Line 6
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
The statement has been terminated.
check the table
select * from test
nothing was deleted
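For contrast, a minimal sketch of the same trigger written set-based so it copes with any number of deleted rows (audit_deleted_test is an illustrative audit table):

alter trigger trDelete on test after delete
as
begin
    set nocount on;

    -- operate on the whole "deleted" set instead of assuming a single row
    insert into audit_deleted_test (id, deleted_at)
    select id, GETDATE()
    from deleted;
end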
An error in the AFTER DELETE trigger will roll back the transaction. The trigger runs after the rows are deleted but before the change is committed. Is there any particular reason you are using a CLR trigger for this? It seems like something that a pure SQL trigger ought to be able to do in a possibly more lightweight manner.
Well, you shouldn't be doing a SELECT in a trigger (who will see the results?), and if all you are doing is an insert, it shouldn't be a CLR trigger either. CLR is not generally a good thing to have in a trigger; it is far better to use T-SQL code in a trigger unless you need to do something that T-SQL can't handle, which is probably a bad idea in a trigger anyway.
Have you reverted to the last version you have in source control? Perhaps that would clear the problem if it has gotten corrupted.
Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
What this means is that something is comparing the columns I am accessing in my SELECT statement against the columns that exist on the table when the script is "compiled". For my purposes this is undesirable functionality. My question is whether there is anything that can be done so that this code executes successfully on every run, or, if that is not possible, perhaps someone could explain why the demonstrated functionality is desirable. The only solutions I have currently are to wrap the SELECT with EXEC or to SELECT *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, as the batch will get parsed before the #test table exists.
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC, or to use another language and embed the SQL queries within it.
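For example, a minimal sketch of the EXEC option applied to the original snippet (only the wrapping is new; the names are from the question):

IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    EXEC('SELECT BadColumn FROM #test')   -- only compiled if this branch actually runs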
This problem is also visible in these situations:
IF 1 = 1
select dummy = GETDATE() into #tmp
ELSE
select dummy = GETDATE() into #tmp
Although the second statement is never executed, the same error occurs.
It seems the query engine's first-level validation ignores all conditional statements.
You say you have problems with subsequent requests, and that is because the object already exists. It is recommended that you drop your temporary tables as soon as possible when you are done with them.
Read more about temporary table performance at:
SQL Server performance.com