We plan to configure a stored procedure to run as a daily batch job using the Oracle DBMS_SCHEDULER package. We would like to know the best way to log an error message when an error occurs. Is logging to a temporary table an option, or is there a better one? Thanks in advance.
If you decide to roll your own logging and log into a table, you might go the autonomous transaction route.
An autonomous transaction is a transaction that can be committed independently of the current transaction you are in.
That way you can log and commit all the info you want to your log table, independently of the success or failure of your stored procedure or batch process's parent transaction.
CREATE OR REPLACE PROCEDURE SP_LOG (
    P_MESSAGE_TEXT VARCHAR2
) IS
    -- Commits independently of the calling transaction.
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    DBMS_OUTPUT.PUT_LINE(P_MESSAGE_TEXT);
    INSERT INTO PROCESSING_LOG (
        MESSAGE_DATE,
        MESSAGE_TEXT
    ) VALUES (
        SYSDATE,
        P_MESSAGE_TEXT
    );
    COMMIT; -- ends only the autonomous transaction, not the caller's
END;
/
Then if you call it like this, you can still get messages committed to your log table even if you have a failure and roll back your transaction:
BEGIN
    SP_LOG('Starting task 1 of 2');
    -- ... code for task 1 ...
    SP_LOG('Starting task 2 of 2');
    -- ... code for task 2 ...
    SP_LOG('Ending Tasks');
    -- ... determine success or failure of process and commit or rollback ...
    ROLLBACK;
END;
/
You may want to tidy it up with exceptions that make sense for your code, but that is the general idea: the data written in the calls to SP_LOG persists, while the parent transaction can still be rolled back.
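For example, a minimal sketch of one way to structure that (the message text and tasks are placeholders):
BEGIN
    SP_LOG('Starting batch');
    -- ... tasks ...
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        -- This entry survives the rollback because SP_LOG commits
        -- in its own autonomous transaction.
        SP_LOG('Batch failed: ' || SQLERRM);
        ROLLBACK;
        RAISE;
END;
/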
You could use log4plsql (http://log4plsql.sourceforge.net/) and change the choice later by configuration changes, not code changes.
The log4plsql page gives a list of various places it can log.
It also depends how applications and systems are monitored in your environment - if there is a standard way (for example, a business I worked at used IRC for monitoring) then you might want a function that calls into that.
You say that you don't have a lot of control over the DB environment to install logging packages - if this is the case then you'll be limited to querying the information in the dba_scheduler_job_run_details and dba_scheduler_job_log system views - you'll be able to see the history of executions here. Unhandled exceptions will show up in the ADDITIONAL_INFO column. If you need notification you can poll these views and generate email.
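For example, a quick sketch of pulling recent failures from those views (assuming you have SELECT privilege on the DBA_ views; otherwise the ALL_/USER_ equivalents may work):
SELECT job_name,
       status,
       actual_start_date,
       additional_info
FROM dba_scheduler_job_run_details
WHERE status <> 'SUCCEEDED'
ORDER BY actual_start_date DESC;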
That depends on how you will deal with errors: if you just need to be notified, email is the best option; if you need to manually continue processing after the error, the table is a good choice.
As the title says - when I perform an "INSERT" statement, I can't see the results unless I re-open PL/SQL Developer.
To make things a bit more clear:
After I perform this statement on the empty table "worker_temp" -
insert into worker_temp
select * from worker_b;
I see that 100 records have been inserted.
But when I try to see the results using this query:
select * from worker_temp;
I still see an empty table.
Only after I quit PL/SQL Developer and re-open it can I see the records that I inserted earlier.
Is there a way to see the changes without closing and re-opening PL/SQL Developer?
What I've tried so far:
I've tried to refresh the table using right-click on it, and I've also tried to refresh the whole tables folder.
I also tried committing -
commit;
But I'm not sure what that even is.
Tool-agnostic way:
begin
    insert into worker_temp
    select * from worker_b;
    commit;
end;
Judging by all the screenshots you are likely getting separate database sessions in 'each' tab you are using - which is a good thing. You have to issue the commit on the same session that performed the insert. Another way of understanding this:
begin
    insert into worker_temp select * from worker_b;
    DBMS_OUTPUT.PUT_LINE('Rows inserted but not committed ' || SQL%ROWCOUNT);
    -- 'undo' the insert by rolling back the insert instead of commit.
    rollback;
end;
The default setting in PL/SQL Developer is Multi session:
This means that each editor window you have open is logged into the database in a separate session. A session can't see another session's changes until it commits. This is rather like saving a shared Excel spreadsheet on a network drive. Nobody can see your changes until you have finished making them, which you'll appreciate is an important feature in a multi-user database.
In PL/SQL Developer, the Multi session default setting means that you can start a long-running query in one SQL window, and then get on with something else in another without being blocked and having to wait for it. With this setting, you'll need to commit your changes before any other editor window can see them. There are Commit and Rollback icons in the toolbar, or you can type commit; and execute it.
However, I always set mine to Dual session, meaning all windows are part of the same session, even if it means I sometimes have to wait for something. I find this simplifies things considerably, and also I can make changes across multiple windows without needing to commit, which can be helpful when working with global temporary tables or alter session commands.
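As an illustration (a sketch): with Multi session, a session-level setting changed in one window does not apply in the others, because each window is a separate session:
-- Window 1 (its own session under the Multi session setting):
alter session set nls_date_format = 'YYYY-MM-DD';
-- Window 2 (a different session): unaffected by the ALTER SESSION above,
-- since session settings are private to the session that ran them.
select sysdate from dual;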
Read more in this setup guide.
I have a series of data-modification operations to do, such as:
1. update table_a set value=1 where id=1
2. update table_b set value=2 where id=1
3. update table_c set value=3 where id=1
and I want to ensure that these three operations all complete. I know using a transaction can guarantee that either all are performed or none are performed, but my point is that all three must be performed: when the first SQL statement has been performed, the app instance may crash, and the other two are missed.
Note this is a distributed environment; maybe another app instance can take over the unfinished SQL, but how can I do that?
Can I use a stored procedure, so the app instance only fires the stored procedure and the database finishes all the SQL?
If the app instance suddenly crashes while performing the transaction, will that lead to a deadlock?
Deadlocks are not caused by requests that crash before the end of their execution. If your request crashes in the middle of a transaction, it won't lead to a deadlock.
It is always better to use stored procedures, but this won't help you for this specific case.
What I would suggest is indeed the use of a transaction with a try/catch to roll back the transaction in case of failure.
Something like this:
BEGIN TRY -- start of try
    BEGIN TRANSACTION; -- start of transaction
    UPDATE table_a SET value = 1 WHERE id = 1;
    UPDATE table_b SET value = 2 WHERE id = 1;
    UPDATE table_c SET value = 3 WHERE id = 1;
    COMMIT TRANSACTION; -- everything went OK, we commit
END TRY
BEGIN CATCH -- an error happened, we roll back
    PRINT N'Unexpected error';
    ROLLBACK TRANSACTION;
END CATCH
You can check more complete examples here
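(The above is SQL Server syntax. If your database is Oracle, a minimal sketch of the same pattern in PL/SQL might look like this:)
BEGIN
    UPDATE table_a SET value = 1 WHERE id = 1;
    UPDATE table_b SET value = 2 WHERE id = 1;
    UPDATE table_c SET value = 3 WHERE id = 1;
    COMMIT; -- all three succeeded
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK; -- undo whatever was done before the failure
        RAISE;
END;
/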
If an app is performing a transaction on a database server, and the app crashes (abruptly disconnects from the database) before committing the transaction, the database server rolls back the transaction. The disconnection does not leave the database in an unusable (potentially deadlocked) state.
So your database contents won't reflect any of your three UPDATE operations when your app crashes during your transaction. It will just lose the transaction in progress.
How to handle this potential failure mode?
Reduce the probability of a crash during a transaction. Try to avoid doing stuff in your app that could make it crash while your transaction is in process. For example, if you get data from some other server or device, get it all before you begin your transaction. This solution is usually good enough for production apps.
Rig up some sort of way for your app, upon restarting, to find out the most recent successful transaction. One good way? Add a column like this to one of your tables: (this is a MySQL thing.)
last_update_timestamp TIMESTAMP DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
This causes every UPDATE to a row to -- automatically -- put NOW() into that row's last_update_timestamp column. Then, when your crashed app restarts, you can do
SELECT MAX(last_update_timestamp) FROM table
and you'll know when the most recent successful update occurred. This automatic update also gets rolled back if a transaction is rolled back. If you know when the last successful update occurred, your app may be able to redo the one that was rolled back by the crash.
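Putting the pieces together, a sketch (hypothetical table name; MySQL syntax as described above):
-- Add the self-maintaining timestamp column:
ALTER TABLE table_a
    ADD COLUMN last_update_timestamp TIMESTAMP NULL
        DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP;
-- After a restart, find the most recent committed update:
SELECT MAX(last_update_timestamp) FROM table_a;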
If you choose to build a redo-transaction capability, be sure to build it so you can test it! if (testingAppCrash) crashNow = 1 / 0; might do the trick in your app.
I think I have a misunderstanding about how to use Savepoints. Perhaps someone can clear it up for me. I present my example as what I am trying to do, and what I have experienced.
My app is doing a certain procedure.
Before that procedure (and associated DB operations) I create a savepoint.
During that procedure, I initiate a select for update,
which creates a number of locks:
lock1 - duration=transaction, class=row, type=intent row=big number
lock2 - duration=transaction, class=row, type=WriteNoPK row=big number
Should that Java procedure succeed, the associated DB transaction is completed via a commit.
However, if the Java procedure fails, I also want to roll back any associated DB operations.
I have been attempting this by:
conn.rollback(mySavepoint);
However, this has not been releasing the table locks created (above) by the DB operations that I thought I had just rolled back via conn.rollback(mySavepoint).
I have tested this behaviour with two databases: Sybase, and Derby.
Why is this the case?
Do I really need to commit after the conn.rollback(mySavepoint)?
It just seems a bit counter-intuitive.
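For reference, the SQL-level equivalent of the sequence described, as a sketch in Oracle-style syntax with hypothetical names:
SAVEPOINT my_savepoint;
SELECT * FROM orders WHERE id = 42 FOR UPDATE; -- acquires row locks
ROLLBACK TO SAVEPOINT my_savepoint;
-- This undoes the changes made after the savepoint, but in some databases
-- (Sybase and Derby, per the behaviour above) the locks acquired after the
-- savepoint are held until the transaction itself ends with a full COMMIT
-- or ROLLBACK.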
I'm pretty new to PL/SQL, although I've got lots of db experience with other RDBMSs. Here's my current issue.
procedure CreateWorkUnit
is
begin
    update workunit
       set workunitstatus = 2 -- workunit loaded
     where SYSDATE between start_date and end_date
       and workunitstatus = 1; -- workunit created
    -- commit here?
    loader; -- loads records based on status, will have a commit of its own
    update workunit wu
       set workunititemcount = (select count(*)
                                  from workunititems wui
                                 where wui.wuid = wu.wuid)
     where workunitstatus = 2;
end;
So the behaviour I'm seeing, with or without commit statements, is that I have to execute twice: once to flip the statuses, then the loader will run on the second execution. I'd like it all to run in one go.
I'd appreciate any words of oracle wisdom.
Thanks!
When to commit transactions in a batch procedure? It is a good question, although it seems only vaguely related to the problems with the code you posted. But let's answer it anyway.
We need to commit when the PL/SQL procedure has completed a unit of work. A unit of work is a business transaction. This would normally be at the end of the program, the last statement before the EXCEPTION section.
Sometimes not even then. The decision to commit or rollback properly lies with the top of the calling stack. If our PL/SQL is being called from a client (maybe a user clicking a button on a screen) then perhaps the client should issue the commit.
But it is not unreasonable for a batch process to manage its own commit (and rollback in the case of errors). The main point is that only the topmost procedure should issue COMMIT. If a procedure calls other procedures, those called programs should not issue commits or rollbacks; they should handle any errors (log, etc.) and re-raise them to the calling program, and let it decide whether to roll back. Because all the called procedures run in the same session, and hence the same transaction, a rollback in a called program would revert all the changes in the batch process. That's not right. The same reasoning applies to commits.
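A minimal sketch of that shape (load_stage and apply_changes are hypothetical called procedures; only the top level commits or rolls back):
CREATE OR REPLACE PROCEDURE run_batch IS
BEGIN
    load_stage;    -- called procedure: no COMMIT inside
    apply_changes; -- called procedure: logs and re-raises on error
    COMMIT;        -- the single commit for the whole unit of work
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;  -- revert the entire business transaction
        RAISE;
END run_batch;
/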
You will sometimes read advice on using intermittent commits to break up long running processes into smaller units e.g. every 1000 inserts. This is bad advice for several reasons, not all of them related to transactions. The pertinent ones are:
Issuing a commit frees locks and allows the transaction's undo to be overwritten; a long-running query that still needs that undo for a consistent read will then fail. This is the cause of ORA-01555 Snapshot too old errors.
It also affects read consistency, which applies at the statement and/or transaction level. Committing mid-fetch (for example, inside a loop over a SELECT ... FOR UPDATE cursor) is the cause of ORA-01002 Fetch out of sequence errors.
It affects re-startability. If the program fails having processed 30% of the records, can we be confident it will only process the remaining 70% when we re-run the batch?
Once we commit records other sessions can see those changes: does it make sense for other users to see a partially changed view of the data?
So, the words of "Oracle wisdom" are: always align the database transaction with the business transaction, with a single commit per unit of work.
Somebody mentioned autonomous transactions as a way of issuing commits in sub-processes. This is usually a bad idea. Changes made in an autonomous transaction are visible to other sessions but not to our own. That very rarely makes sense. It also creates the same problems with re-startability which I discussed earlier.
The only acceptable use for autonomous transactions is recording activity (error log, trace, audit records). We need that data to persist regardless of what happens in the wider transaction. Any other use of the pragma is almost certainly a workaround for a poor design, which actually just makes the problem worse.
You may not need to commit in the PL/SQL procedure at all. Procedures you call from another procedure run in the same session, so they don't need to commit; and a procedure's changes are rolled back if its session rolls back or an unhandled exception propagates out.
I mis-classified my problem. I thought this was a transaction problem, but really it was one of my flags not being set as expected: a number field was NULL when I was expecting 0.
Sorry for that.
Josh Robinson
I have a vendor reporting product executing queries to pull report data - no inserts, no updates, just reading data.
We have doubled our heap size three times and are now at 1024 4K pages. The app will run fine for a week, then we begin to see DB2 SQL error: SQLCODE: -954, SQLSTATE: 57011, indicating the transaction log is not able to accommodate the request.
It's not the size of the reports, since they run fine after a recycle. I spoke with another DBA about this. He believes the problem is a difference between Oracle and DB2: the vendor code is crappy and is not issuing commits on the selects. This causes the references not to be cleaned up, and they slowly accumulate as garbage in the heap.
I wanted to know if this is accurate, as I thought only inserts and updates needed to have commits included. Is there any IBM documentation on this?
We are currently recycling on a weekly basis to alleviate the problem, but I would like to have a good handle on the issue before going back to the vendor asking them to alter their code.
Any transaction needs to be properly terminated -- why did you think that only applies to inserts and updates? Consider running transactionally a "select a from b where c > 12" and then "select a from b where c <= 12"; within a transaction the DB has to guarantee that every a gets returned exactly once either from the first or second select, not both (assuming c is never null;-). Without transactionality, some a's might fall between the cracks or be returned twice if their corresponding c was changed by a different transaction, and that's just not ACID!-)
So when you do not need separate SELECT queries to be transactional wrt each other, tell the DB! And the way you tell is by terminating the transaction after each select (normally commit is what you use for the purpose, though I guess you could, indifferently, choose to use rollback here;-).
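In other words, a sketch (auto-commit off, using table b from the example above):
SELECT a FROM b WHERE c > 12;
COMMIT; -- this unit of work is done; release its claims
SELECT a FROM b WHERE c <= 12;
COMMIT; -- ditto; the two selects are no longer transactional wrt each other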
Per Alex's response, the first SQL activity after any CONNECT, COMMIT, or ROLLBACK initiates a transaction.
To get a handle on your resource issue (transaction logs full), you should investigate your application that issues the reports - ensure that transactions are being closed out explicitly in code. I've seen cases where application developers rely upon the Garbage Collector to clean up database objects - while those objects are waiting for cleanup, the database resources (transactions) are held open.
It's always good practice to explicitly COMMIT or ROLLBACK your transactions as soon as you are done with the data - regardless of the programming methodology you use.
I get this error when committing a transaction on a SELECT query, but despite the error it does return a result set that includes the queried data.
tran.Commit();
error [hy011] [ibm] cli0126e the operation is invalid sqlstate=hy011
I changed my code to tran.Rollback(); and the error disappeared.
Can anyone explain this behavior?