Are there any cases where an INSERT in SQL (specifically Oracle PL/SQL) can fail without an exception being thrown? I'm seeing code that checks SQL%ROWCOUNT = 1 after an INSERT and raises its own user-defined exception otherwise. I don't see how that can ever happen.
It can't fail without an exception, no. Probably the developer who wrote the code didn't know that.
An AFTER statement trigger could conceivably delete the row just inserted. And of course an INSERT...SELECT might find no rows to insert, and so would result in SQL%ROWCOUNT = 0.
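For example, a sketch (with made-up table names) where the INSERT succeeds yet inserts nothing:

BEGIN
  INSERT INTO target_table (col1)
  SELECT col1 FROM source_table WHERE 1 = 0; -- matches no rows

  -- No exception is raised, but nothing was inserted:
  DBMS_OUTPUT.PUT_LINE('Rows inserted: ' || SQL%ROWCOUNT); -- prints 0
END;
/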
In addition to the trigger-based issue @mcabral mentioned, you could have an insert that succeeds but inserts something other than 1 row, e.g. the INSERT INTO blah (col1) SELECT col2 FROM foo style of insert.
As @TonyAndrews and @GinoA mentioned, there are several ways an INSERT could affect something other than exactly one row (triggers, the INSERT INTO tablename SELECT... syntax).
But the bigger issue is that you're in PL/SQL. As such, the SQL%ROWCOUNT value can be used as a condition to determine program flow, including whether to issue COMMIT or ROLLBACK statements.
Even with just raising a user-defined exception, the calling PL/SQL block can handle the exception itself.
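As a sketch (all names invented), the pattern being described might look like this, with the caller free to handle the exception itself:

DECLARE
  e_unexpected_count EXCEPTION; -- hypothetical user-defined exception
  l_rows PLS_INTEGER;
BEGIN
  INSERT INTO target_table (col1)
  SELECT col1 FROM source_table WHERE col2 = 'X';
  l_rows := SQL%ROWCOUNT; -- capture before any other SQL resets it

  IF l_rows != 1 THEN
    RAISE e_unexpected_count;
  END IF;
  COMMIT;
EXCEPTION
  WHEN e_unexpected_count THEN
    ROLLBACK;
    DBMS_OUTPUT.PUT_LINE('Expected exactly 1 row, got ' || l_rows);
END;
/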
EDIT: Someone should modify the question title to indicate PL/SQL (as indicated in the question itself), since that's not the same thing as the SQL scope the title suggests.
Related
I want to know how I can validate whether my INSERT was successful. For example, I have this query:
INSERT INTO TEST VALUES ('TEST01');
I am using SQL Server 2008 Express and am running this from Java, but I am calling a stored procedure so I can't get a boolean answer.
You can use @@ROWCOUNT after your INSERT statement. It returns the number of rows affected by the last statement.
If it returns a value greater than 0, your rows were inserted successfully.
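Since you're calling a stored procedure, one approach (a sketch; the procedure and parameter names are made up) is to hand @@ROWCOUNT back through an OUTPUT parameter, which your Java code can read as a registered OUT parameter:

CREATE PROCEDURE dbo.InsertTest
    @Value VARCHAR(50),
    @RowsInserted INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO TEST VALUES (@Value);
    -- Capture immediately: any further statement would reset @@ROWCOUNT.
    SET @RowsInserted = @@ROWCOUNT;
END

A value greater than 0 in @RowsInserted means the insert succeeded.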
There are a lot of ways to accomplish this.
Use a TRY...CATCH block: put the INSERT statement in the TRY block, then handle the exception in the CATCH block.
Check the global variable @@ERROR: if it is non-zero, the previous statement resulted in an error.
There might be other ways as well but these are the ones I can think of now.
These options will catch the error after the insert statement has run; a minimal sketch of the TRY...CATCH approach appears below. You can also write some SQL code to ensure that the insert statement is executed only if the values pass data-validation checks.
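A sketch of the TRY...CATCH option, using the question's TEST table:

BEGIN TRY
    INSERT INTO TEST VALUES ('TEST01');
    PRINT 'Insert succeeded';
END TRY
BEGIN CATCH
    -- ERROR_NUMBER() and ERROR_MESSAGE() describe what went wrong.
    PRINT 'Insert failed: ' + ERROR_MESSAGE();
END CATCH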
I have a file with INSERTs, UPDATEs and DELETEs. I want to execute each of the DML statements in this file, but in case any exception occurs, I want to print that exception and continue. Is there a simple solution for this? Below is a solution which involves wrapping each DML in an anonymous block and printing the exception, but I think it is not simple (or elegant) enough:
BEGIN
  -- DML statement goes here
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
END;
Needless to say, this cannot be done (easily) for hundreds of DMLs.
A possible solution is to add an error logging clause to your statements. To each of your statements you have to add the following (for example, for an INSERT):
insert into my_table (...)
values (...)
LOG ERRORS INTO err$_my_table ('INSERT') REJECT LIMIT UNLIMITED;
Here err$_my_table is a table for error logging. To create it, execute the following (once per table):
begin
DBMS_ERRLOG.CREATE_ERROR_LOG ('MY_TABLE');
end;
/
The error logging clause suppresses the exception and puts every row that raised one into the error logging table. After execution you can query these tables; they also contain the values of the SQLCODE and SQLERRM functions. The disadvantages of this method are that you need to change all your statements and create a logging table for each table involved.
More about the clause in the documentation.
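For example, after the run you might inspect the log table like this (ORA_ERR_NUMBER$, ORA_ERR_MESG$ and ORA_ERR_TAG$ are among the columns DBMS_ERRLOG creates by default):

SELECT ora_err_number$, -- the ORA- error code
       ora_err_mesg$,   -- the error message text
       ora_err_tag$     -- the tag given in the LOG ERRORS clause ('INSERT')
FROM   err$_my_table;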
I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the INSERT operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message, and the INSERT operation on the triggered table is rolled back:
An error was raised during trigger execution. The batch has been
aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. A BEGIN TRY inside the trigger will not help either. The best practice would be not to write any code in a trigger that could possibly fail, unless it is desired to also fail the firing statement.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
COMMIT;
-- ... do whatever the trigger does
END;
Note that if there is an error in the trigger code this will still produce the error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger to a stored procedure, then call that stored procedure from a wrapper SP that implements BEGIN TRY, and finally call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if the logic (now in the SP) needs it, probably via some temp tables.
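A rough sketch of option B (all names invented; the worker SP holds the original trigger logic and reads a temp table instead of INSERTED):

CREATE PROCEDURE dbo.Test1_InsertWork
AS
BEGIN
    -- ... the original trigger logic, reading #TableInserted instead of INSERTED
    INSERT INTO AuditTable (Column1)
    SELECT Column1 FROM #TableInserted;
END
GO

CREATE PROCEDURE dbo.Test1_InsertWrapper
AS
BEGIN
    BEGIN TRY
        EXEC dbo.Test1_InsertWork;
    END TRY
    BEGIN CATCH
        -- Intentionally ignore the error (or log it somewhere harmless).
        DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();
    END CATCH
END
GO

CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    -- Temp tables created here are visible to the procedures called below.
    SELECT Column1 INTO #TableInserted FROM inserted;
    EXEC dbo.Test1_InsertWrapper;
END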
SQLFiddle DEMO
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checking will work around the fundamental issue.
If you want the tight coupling of a trigger, then you must buy into the lower availability induced by that coupling.
If you want to preserve availability (i.e. have the INSERT succeed), then you must give up the coupling and remove the trigger. You must do all the processing you were planning to do in the trigger in a separate transaction that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker-launched procedure, or even an application-layer step would all fit the bill.
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted.". I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
SET XACT_ABORT OFF -- errors in a trigger normally abort the batch and doom the transaction; turn that behavior off
BEGIN TRY
    -- Copy the inserted rows out; the dynamic SQL below cannot reference the [inserted] pseudo-table
    SELECT [Column1] INTO #TableInserted FROM [inserted]
    -- Executing in a child scope via sp_executesql makes its errors catchable here
    EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
END TRY
BEGIN CATCH
    -- Swallow the error so the original INSERT is not rolled back
END CATCH
SET XACT_ABORT ON
END
I have tons of insert statements.
I want to ignore errors during the execution of these lines, and I prefer not to wrap each line separately.
Example:
try
insert 1
insert 2
insert 3
exception
...
I want that if an exception is thrown in insert 1, it will be ignored and execution will continue with insert 2, and so on.
How can I do it?
I'm looking for something like "Resume next" in VB.
If you can move all the inserts to a SQL script and then run it in SQL*Plus, every insert will run on its own and the script will continue to run.
If you are using PL/SQL Developer (you tagged it), then open a new Command Window (which is exactly like a SQL script run by SQL*Plus) and put your statements in like this:
insert into your_table values(1,'aa');
insert into your_table values(2/0,'bb');
insert into your_table values(3,'cc');
commit;
Even though statement (2) will throw an exception, since it's not in a block the script will continue to the next command.
UPDATE: Per @CheranShunmugavel's comment, add
WHENEVER SQLERROR CONTINUE NONE
at the top of the script (especially if you're using SQL*Plus, where the default is EXIT).
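So the top of the script would look something like:

WHENEVER SQLERROR CONTINUE NONE

insert into your_table values(1,'aa');
insert into your_table values(2/0,'bb'); -- fails, but the script carries on
insert into your_table values(3,'cc');
commit;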
You'd need to wrap each INSERT statement with its own exception handler. If you have "tons" of insert statements where any of the statements can fail, however, I would tend to suspect that you're approaching the problem incorrectly. Where are these statements coming from? Could you pull the data directly from that source system? Could you execute the statements in a loop rather than listing each one? Could you load the data first into a set of staging tables that will ensure that all the INSERT statements succeed (i.e. no constraints, all columns defined as VARCHAR2(4000), etc.) and then write a single SQL statement that moves the data into the actual destination table with appropriate validations and exception handling?
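For instance, a sketch of the loop idea (the statements are hard-coded in a collection here for brevity; they could equally be read from a staging table or file):

DECLARE
  TYPE t_stmts IS TABLE OF VARCHAR2(4000);
  l_stmts t_stmts := t_stmts(
    'insert into your_table values (1, ''aa'')',
    'insert into your_table values (2/0, ''bb'')', -- will fail
    'insert into your_table values (3, ''cc'')'
  );
BEGIN
  FOR i IN 1 .. l_stmts.COUNT LOOP
    BEGIN
      EXECUTE IMMEDIATE l_stmts(i); -- each statement gets its own handler
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Statement ' || i || ' failed: ' || SQLERRM);
    END;
  END LOOP;
END;
/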
Use the LOG ERRORS clause. More information in Avoiding Bulk INSERT Failures with DML Error Logging.
I am working on pymssql, a python MSSQL driver. I have encountered an interesting situation that I can't seem to find documentation for. It seems that when a CREATE TABLE statement fails, the transaction it was run in is implicitly rolled back:
-- shows 0
select @@TRANCOUNT
BEGIN TRAN
-- will cause an error
INSERT INTO foobar values ('baz')
-- shows 1 as expected
select @@TRANCOUNT
-- will cause an error
CREATE TABLE badschema.t1 (
test1 CHAR(5) NOT NULL
)
-- shows 0, this is not expected
select @@TRANCOUNT
I would like to understand why this is happening and know if there are docs that describe the situation. I am going to code around this behavior in the driver, but I want to make sure that I do so for any other error types that implicitly rollback a transaction.
NOTE
I am not concerned here with typical transactional behavior. I specifically want to know why an implicit rollback is given in the case of the failed CREATE statement but not with the INSERT statement.
Here is the definitive guide to error handling in SQL Server:
http://www.sommarskog.se/error-handling-I.html
It's long, but in a good way; it was written for SQL Server 2000, but most of it is still accurate. The part you're looking for is here:
http://www.sommarskog.se/error-handling-I.html#whathappens
In your case, the article says that SQL Server is performing a batch abortion, and that it will take this measure in the following situations:
Most conversion errors, for instance conversion of non-numeric string to a numeric value.
Superfluous parameter to a parameterless stored procedure.
Exceeding the maximum nesting-level of stored procedures, triggers and functions.
Being selected as a deadlock victim.
Mismatch in number of columns in INSERT-EXEC.
Running out of space for data file or transaction log.
There's a bit more to it than this, so make sure to read the entire section.
It is often, but not always, the point of a transaction to roll back the entire thing if any part of it fails:
http://www.firstsql.com/tutor5.htm
One of the most common reasons to use transactions is when you need the action to be atomic:
An atomic operation in computer science refers to a set of operations that can be combined so that they appear to the rest of the system to be a single operation with only two possible outcomes: success or failure.
en.wikipedia.org/wiki/Atomic_(computer_science)
It's probably not documented because, if I understand your example correctly, it is assumed you intended that functionality by beginning a transaction with BEGIN TRAN.
If you run it as one batch (which I did the first time), the transaction stays open, because the INSERT aborts the batch and the CREATE TABLE is not run. Only if you run it line by line does the transaction get rolled back.
You can also generate an implicit rollback for the INSERT by setting SET XACT_ABORT ON.
My guess (I just had a light bulb moment as I typed the sentence above) is that CREATE TABLE uses SET XACT_ABORT ON internally, which in practice means an implicit rollback.
Some more stuff from me on SO about SET XACT_ABORT (we use it in all our code because it releases locks and rolls back transactions on client CommandTimeout).
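For illustration, a sketch mirroring the question's example (run batch by batch; the INSERT into foobar is assumed to fail as before):

SET XACT_ABORT ON;
BEGIN TRAN;
GO

-- Fails; because XACT_ABORT is ON, the batch is aborted
-- and the transaction is rolled back.
INSERT INTO foobar VALUES ('baz');
GO

-- Shows 0 now, just like after the failed CREATE TABLE.
SELECT @@TRANCOUNT;
GO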