Delete statement after executing a SQL insert twice

On an Oracle database, I executed an INSERT statement twice in a production environment.
Sadly, the rollback option did not seem to work.
The insert statement was:
INSERT INTO MAE_INT.T_INT_APPLICATION (INAP_IDENT, INAP_PARAM, INAP_VALEUR, INAP_DATE)
VALUES ((SELECT MAX(INAP_IDENT)+1 FROM MAE_INT.T_INT_APPLICATION), 'monitoring', 'true', '10/06/2016');
COMMIT;
I guess the only option now is to write a DELETE statement to remove the duplicate.
Can anyone help with that? I'm not sure how to write it.

You can delete the row with MAX(INAP_IDENT) as below, leaving you with only the row from the first insert.
NOTE: TEST IT IN DEV/UAT ENVIRONMENT FIRST
delete from MAE_INT.T_INT_APPLICATION
where INAP_IDENT=
(SELECT MAX(INAP_IDENT) FROM MAE_INT.T_INT_APPLICATION);
Before committing, check that you no longer have a duplicate entry.
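For instance, a quick sanity check before the DELETE (and again before the COMMIT) could look like this; it is only a sketch and assumes the duplicate rows share the same INAP_PARAM value:
SELECT INAP_IDENT, INAP_PARAM, INAP_VALEUR, INAP_DATE
FROM MAE_INT.T_INT_APPLICATION
WHERE INAP_PARAM = 'monitoring'
ORDER BY INAP_IDENT;
Seeing two rows before the DELETE and one row after it confirms that only the duplicate was removed.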

SQL Server gives error to unreachable code

We have the following case:
We have a SQL Server database with a table CL_Example and another database with a view, also named CL_Example. This view is built from more than one table using inner joins.
The structure of the CL_Example view and CL_Example table is the same.
We have a SQL script which will be executed on both of these databases. If it finds CL_Example as a table, it should insert the data; if it finds CL_Example as a view, it should not insert the data.
But when we are executing the script on the database where CL_Example is a view, we get the following error:
Update or insert of view or function 'dbo.CL_Example' failed because it contains a derived or constant field.
This error is raised even though the insert statement is unreachable on the database where CL_Example is a view. Can we suppress this error and execute the script on both databases?
SQL Server compiles all the statements in a batch before executing any of them.
This is a compile-time error, and it stops the whole batch from compiling. (Referencing non-existent objects can cause a statement to get deferred compilation, where compilation is retried at statement execution time, but that does not apply here because the view does exist.)
If the batch may need to run such problem statements, you need to move them into a different scope so they are only compiled if they meet your flow-of-control conditions.
Often the easiest way of doing that is to wrap the whole problem statement in EXEC('')
IF OBJECT_ID('dbo.CL_Example', 'U') IS NOT NULL /*It is a table*/
EXEC('UPDATE dbo.CL_Example ....')
Or use sys.sp_executesql if it isn't just a static string and you need to pass parameters to it.
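For the parameterized case, a minimal sketch (the column names and parameters here are illustrative, not from the original schema):
IF OBJECT_ID('dbo.CL_Example', 'U') IS NOT NULL /*It is a table*/
EXEC sys.sp_executesql
    N'INSERT INTO dbo.CL_Example (Col1, Col2) VALUES (@p1, @p2);',
    N'@p1 int, @p2 int',
    @p1 = 1, @p2 = 2;
Because the INSERT lives inside a string, it is compiled only when the EXEC actually runs, so the batch still compiles on the database where CL_Example is a view.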
May I suggest another approach to your problem: instead of fixing the query to not insert when CL_Example is a view, you can create an INSTEAD OF INSERT trigger on the view that does nothing, and then you don't have to worry about the insert queries at all.
EXAMPLE:
create view vw_test
as
select 1 as a, 2 as b;
go
select * from vw_test;
go
if (1=2)
begin
insert into vw_test (a,b) values (3,4); -- unreachable but fails
end
go
create trigger tg_fake_insert on vw_test
instead of insert
as
begin
set nocount on;
end;
go
if (1=2)
begin
insert into vw_test (a,b) values (3,4); --unreachable, does not fail
end
else
begin
insert into vw_test (a,b) values (3,4); --actually works
end
go
You might even want to put some actual logic in that trigger and divert the data somewhere else.
EDIT: One additional thought: if you are using a view to replace a table so that legacy code keeps working, you SHOULD handle the insert/update/delete situations that may occur, and a good way to do that is with INSTEAD OF triggers.
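For example, you could extend the trigger above to divert the data (log_table is a hypothetical side table; note that a view can have only one INSTEAD OF INSERT trigger, so we alter the existing one):
alter trigger tg_fake_insert on vw_test
instead of insert
as
begin
  set nocount on;
  -- capture whatever the legacy code tried to insert
  insert into log_table (a, b)
  select a, b from inserted;
end;
go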

DB2 INSERT INTO … SELECT lock

There is an application, Application1, that issues the following insert statement, which we use in Prod:
INSERT INTO Table1
(SELECT FROM Table2
WHERE conditions are true)
There is another application, Application2, which runs a select query on Table2:
SELECT FROM Table2
WHERE conditions are true
with ur
Now, whenever the insert query is running, the second query runs very slowly, sometimes hitting a read timeout.
I tried to find out whether Table2 was getting locked because it is part of the insert statement, but I couldn't find any concrete evidence.
I did find something for MySQL
How to improve INSERT INTO ... SELECT locking behavior
but nothing for DB2.
Can somebody please help me understand the cause of the slowness?
The insert statement is almost certainly taking locks on Table2 as it reads from it. However, since your second statement uses WITH UR, it is likely it can avoid those locks. If you have a test system, you can try setting the registry variable DB2_WORKLOAD to WAS (which sets all of these: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.perf.doc/doc/c0011218.html).
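For reference, registry variables are set with db2set and typically take effect after an instance restart; a sketch:
db2set DB2_WORKLOAD=WAS
db2stop
db2start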
I suggest using dsmtop or MONREPORT.DBSUMMARY (https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0056370.html) to determine where time is actually being spent for the read-only query.
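For example, a minimal MONREPORT run (the 60 is the monitoring interval in seconds):
CALL MONREPORT.DBSUMMARY(60);
The summary report breaks down where time goes (lock waits, I/O, and so on), which should show whether the read-only query is really waiting on locks.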
Note that the WITH (NOLOCK) hint sometimes suggested for this is SQL Server syntax, not DB2. In DB2 the equivalent is the WITH UR clause, which your second query already uses:
SELECT FROM Table2
WHERE conditions are true
WITH UR
Note: but it will give you dirty reads.

Does an insert query execute without opening a transaction?

When we execute a save, update, or delete operation, we open a transaction and, after completing the operation, close the transaction with a commit. If we run an insert query with single or multiple row values, what will happen?
We use BEGIN TRAN with a DELETE or UPDATE statement to make sure the statement is correct and affects the expected number of rows before we commit.
Some developers don't use it in a session or in batches, because they have already tried their statement and know exactly what it will do. Note that an INSERT run without BEGIN TRAN still executes inside a transaction: by default SQL Server runs in autocommit mode, so each individual statement is an implicit transaction that commits automatically if it succeeds.
I advise you to visit this URL; it's really useful:
https://www.mssqltips.com/sqlservertutorial/3305/what-does-begin-tran-rollback-tran-and-commit-tran-mean/
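To illustrate the difference, a minimal sketch (dbo.Demo is a hypothetical table):
-- Autocommit: this INSERT is its own implicit transaction and is
-- committed as soon as it succeeds; it cannot be rolled back afterwards.
INSERT INTO dbo.Demo (Id) VALUES (1);

-- Explicit transaction: the INSERT becomes permanent only at COMMIT;
-- a ROLLBACK TRAN before that point would undo it.
BEGIN TRAN;
INSERT INTO dbo.Demo (Id) VALUES (2);
COMMIT TRAN;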

Is it possible to go back to line after error in PL/SQL?

I have tons of insert statements.
I want to ignore errors during the execution of these lines, and I prefer not to wrap each line separately.
Example:
try
insert 1
insert 2
insert 3
exception
...
I want it so that if an exception is thrown in insert 1, it is ignored and execution moves on to insert 2, and so on.
How can I do it?
I'm looking for something like "Resume next" in VB.
If you can move all the inserts to a SQL script and run it in SQL*Plus, then every insert will run on its own and the script will continue to run.
If you are using PL/SQL Developer (you tagged it), then open a new Command Window (which behaves exactly like a SQL script run by SQL*Plus) and put your statements in like this:
insert into your_table values (1, 'aa');
insert into your_table values (2/0, 'bb');
insert into your_table values (3, 'cc');
commit;
Even though the second statement will throw an exception, since it's not in a PL/SQL block the script will continue to the next command.
UPDATE: Per @CheranShunmugavel's comment, add
WHENEVER SQLERROR CONTINUE NONE
at the top of the script (especially if you're running the script through SQL*Plus).
You'd need to wrap each INSERT statement with its own exception handler. If you have "tons" of insert statements where any of the statements can fail, however, I would tend to suspect that you're approaching the problem incorrectly. Where are these statements coming from? Could you pull the data directly from that source system? Could you execute the statements in a loop rather than listing each one? Could you load the data first into a set of staging tables that will ensure that all the INSERT statements succeed (i.e. no constraints, all columns defined as VARCHAR2(4000), etc.) and then write a single SQL statement that moves the data into the actual destination table with appropriate validations and exception handling?
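If you do keep individual statements, a minimal sketch of wrapping each INSERT in its own handler (your_table and the values are illustrative):
begin
  begin
    insert into your_table values (1, 'aa');
  exception
    when others then null;  -- log or ignore, then carry on
  end;
  begin
    insert into your_table values (2/0, 'bb');  -- fails, but is swallowed
  exception
    when others then null;
  end;
  commit;
end;
/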
Use the LOG ERRORS clause. More information in Avoiding Bulk INSERT Failures with DML Error Logging
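A minimal sketch of DML error logging (table and column names are illustrative):
-- One-time setup: creates the ERR$_YOUR_TABLE log table.
exec dbms_errlog.create_error_log('YOUR_TABLE');

-- Failing rows are written to the log table instead of aborting the statement.
insert into your_table (id, val)
select id, val from staging_table
log errors into err$_your_table ('load-1') reject limit unlimited;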

After Delete Trigger Fires Only After Delete?

I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation...
I made three nearly identical SQL CLR AFTER DELETE triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it.
By "stopped working", I mean records could not be deleted from the table via client software. Disabling the trigger allowed deletes again, but re-enabling it interfered with the ability to delete.
So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone?
All the trigger looks like is this:
ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS
EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger]
GO
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
UPDATE:
Well, after re-executing the same trigger code above, everything works again, so I may never know what error, if any, SSMS would have given.
Also, there is no call to rollback anywhere in the trigger's code.
AFTER means the trigger fires after the event, but still inside the same transaction, so the delete can be rolled back.
Example:
create table test(id int)
go
create trigger trDelete on test after delete
as
print 'i fired '
rollback
Do an insert:
insert test values (1)
Now delete the data:
delete test
Here is the output from the trigger:
i fired
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
Now check the table and verify that nothing was deleted:
select * from test
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
Suddenly, one of the three stopped working while an automated delete tool was run on it.
Triggers fire per batch/statement, not per row. Is it possible that your trigger wasn't coded for multi-row operations and the automated tool deleted more than one row in the batch? Take a look at Best Practice: Coding SQL Server triggers for multi-row operations.
Here is an example that will make the trigger fail without doing an explicit rollback:
alter trigger trDelete on test after delete
as
print 'i fired '
declare @id int
select @id = (select id from deleted)
GO
Insert some rows:
insert test values (1)
insert test values (2)
insert test values (3)
Run this:
delete test
i fired
Msg 512, Level 16, State 1, Procedure trDelete, Line 6
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
The statement has been terminated.
Check the table:
select * from test
Nothing was deleted.
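For contrast, a multi-row-safe version of this trigger would treat the deleted pseudo-table as a set instead of assigning it to a scalar variable (audit_table is a hypothetical destination):
alter trigger trDelete on test after delete
as
begin
  set nocount on;
  -- works for 0, 1, or N deleted rows
  insert into audit_table (id)
  select id from deleted;
end;
go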
An error in the AFTER DELETE trigger will roll back the transaction. The trigger fires after the rows are deleted but before the change is committed. Is there any particular reason you are using a CLR trigger for this? It seems like something that a pure SQL trigger ought to be able to do in a possibly more lightweight manner.
You shouldn't be doing a SELECT in a trigger (who will see the results?), and if all you are doing is an insert, it shouldn't be a CLR trigger either. CLR is generally not a good thing to have in a trigger; it is far better to use T-SQL code in a trigger unless you need to do something that T-SQL can't handle, which is probably a bad idea in a trigger anyway.
Have you reverted to the last version you have in source control? Perhaps that would clear the problem if it has gotten corrupted.