Restart a scenario automatically from the task where it fails - sql

I am using ODI, and I have a single mapping and a scenario for it. How can I automatically restart the scenario from the same point after it fails due to some issue?
For example, if I have around 100 rows and 90 get inserted, and then the server shuts down or the scenario stops for some other reason, how can I restart the scenario from that same point and insert the remaining 10 rows, without having to start from the beginning of the scenario?

You can refer to this to restart the session from where it failed:
ODI Session Restart - Scenarios & Load Plans

I don't know if there is a way to do exactly what you asked, but you can try to make a procedure using FETCH: create a variable that holds the number of rows that were already inserted, and when the scenario restarts you can continue from beyond that point.
-- variable_rows holds the number of rows already inserted before the failure
SELECT val
FROM rownum_order_test
ORDER BY val
OFFSET variable_rows ROWS FETCH NEXT 4 ROWS ONLY;
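One way to populate that variable (assuming the target table is named target_table, which is not given in the question) is to count the rows that already landed in the target, e.g. as the refresh query of an ODI variable:
SELECT COUNT(*) FROM target_table;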
Or create a status for each row: after the insert, change the status from 1 to 2, and then when you restart, only insert the rows that still have status 1, as in the sketch below.
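A minimal PL/SQL sketch of that status-column idea; the table and column names (source_table, target_table, status) are hypothetical:
-- status 1 = pending, 2 = already inserted
BEGIN
  FOR r IN (SELECT id, val FROM source_table WHERE status = 1) LOOP
    INSERT INTO target_table (id, val) VALUES (r.id, r.val);
    UPDATE source_table SET status = 2 WHERE id = r.id;
    COMMIT;  -- commit per row, so a restart resumes after the last committed row
  END LOOP;
END;
/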

Related

Unable to select empty or 0 row table in Oracle 12c using Toad

Oracle version 12.1.0.2.0.
Toad version 12.1.0.22
I have a Table_A with 2 columns (A NUMBER, B NUMBER). Table_A is a new object created for a data fix. No live application objects refer to it.
Due to a logic issue, an insert statement initiated a transaction to insert billions of records into Table_A. I caught it midway and killed the Oracle session with the help of DBAs. The session is now killed and I no longer see it in Toad's session browser, but it took almost 4 hours for the killed session to disappear; during those 4 hours it showed there with status Killed. I believe it must have been rolling back the data.
Current Problem:
If I select from Table_A (without a hint) from my Oracle user account, I either get the ORA error below or the select runs forever (it kept running for more than 30 minutes, so I stopped the execution):
ORA-02395: exceeded call limit on IO usage
If I select with a hint, it returns 0 rows.
SELECT /*+ parallel(4) */ *
FROM table_A;
Question:
Does the killed session have any issues, and is it running some I/O in the background? I have no clue why the select statement (without a hint) runs so long just to return 0 rows. As this happened in a production system, I'm worried that some background process could cause issues in the coming days.
Apologies, I do not have DBA privileges to check locks or background running processes. If I missed providing any additional info, please let me know. Thank you in advance for your time in responding.
As this happened in a production system, I do not have access to most of the Oracle v$ or metadata tables. I tried using the session browser to find locks and background processes, but nothing helped.
select *
from table_A;
I expect it to return 0 rows without a delay.

Oracle DB read next unlocked row

I have the following query running at three different instances of my application, all trying to read rows from a table one by one, as a queue.
SELECT *
FROM order_request_status ors
WHERE ors.status = 'new' AND ROWNUM <= 1
ORDER BY ors.id ASC
FOR UPDATE SKIP LOCKED;
Now the issue: if the row is locked by one application instance, I want my second application instance to read the next unlocked row, but that is not happening with FOR UPDATE SKIP LOCKED.
Please suggest how I can implement a queue-like feature using Oracle DB.
The SKIP LOCKED processing happens after rows are returned by the query. Your query stops after reading just one record because of ROWNUM <= 1, so if the record it finds has already been locked by another session, your query skips it and stops without finding any more records.
If you only want to process one record at a time, just fetch one record from the cursor instead of using ROWNUM to limit it.
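A minimal PL/SQL sketch of that cursor approach, using the table from the question; the 'processing' status value and the rest of the block are illustrative assumptions, not a definitive implementation:
DECLARE
  CURSOR c_next IS
    SELECT ors.id
    FROM order_request_status ors
    WHERE ors.status = 'new'
    ORDER BY ors.id
    FOR UPDATE SKIP LOCKED;
  v_id order_request_status.id%TYPE;
BEGIN
  OPEN c_next;
  FETCH c_next INTO v_id;  -- fetches the first unlocked row; locked rows are skipped
  IF c_next%FOUND THEN
    UPDATE order_request_status
    SET status = 'processing'
    WHERE id = v_id;
  END IF;
  CLOSE c_next;
  COMMIT;  -- releases the lock once the row has been claimed
END;
/
Each application instance runs this block; whichever instance locks a row first claims it, and the others move on to the next unlocked row.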

1 database connection for 1 unit of work?

Let me start with an example of my "one unit of work":
I need to insert a transaction into the database. Along with that transaction, a couple unique codes are stored into a separate table, the foreign key being the transaction id. I have 4 database interactions: 1) get the next transaction id from a sequence; 2) insert the transaction; 3 & 4) insert 2 unique codes using the transaction id.
This is a multi-threaded Java application. Which route is best?
each database interaction should get its own connection from the pool, commit and close it immediately after each step
a single connection should be retrieved and used for each of the 4 steps, then commit once at the end and close the connection
I worked on a similar issue some time back, using the Akka framework's actors to help me with the multithreading part.
We settled on your second approach: using a single connection to do the work.
1) get the next transaction id from a sequence; 2) insert the transaction; 3 & 4) insert 2 unique codes using the transaction id.
Make this work atomic: if there is a failure in any of the four steps, roll back the complete unit of work.
Maintain an id that marks this piece of work so it can be started again; update it only once all transactions in the unit of work complete.
If the transactions are of different types, you can have different threads operating on them (each transaction type corresponding to a separate unit of work), each with its own DB connection.
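For reference, here is the same atomic unit of work sketched in PL/SQL (the table, sequence, and column names are hypothetical); in JDBC the equivalent shape is a single connection with auto-commit off and one commit at the end:
DECLARE
  v_txn_id NUMBER;
BEGIN
  SELECT txn_seq.NEXTVAL INTO v_txn_id FROM dual;   -- step 1: next id from the sequence
  INSERT INTO transactions (id) VALUES (v_txn_id);  -- step 2: the transaction itself
  INSERT INTO txn_codes (txn_id, code)
    VALUES (v_txn_id, 'CODE_A');                    -- step 3: first unique code
  INSERT INTO txn_codes (txn_id, code)
    VALUES (v_txn_id, 'CODE_B');                    -- step 4: second unique code
  COMMIT;                                           -- single commit at the end
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- any failure undoes the whole unit of work
    RAISE;
END;
/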

How can I update multiple items from a select query in Postgres?

I'm using node.js, node-postgres and Postgres to put together a script to process quite a lot of data from a table. I'm using the cluster module as well, so I'm not stuck with a single thread.
I don't want one of the child processes in the cluster duplicating the processing of another. How can I update the rows I just received from a select query without the possibility of another process or query having also selected the same rows?
I'm assuming my SQL query will look something like:
BEGIN;
SELECT * FROM mytable WHERE ... LIMIT 100;
UPDATE mytable SET status = 'processing' WHERE ...;
COMMIT;
Apologies for my poor knowledge of Postgres and SQL, I've used it once before in a simple PHP web app and never before with node.js.
If you're using a multithreaded application, you cannot and should not be using FOR UPDATE (in the main thread, anyway); what you need is an advisory lock. Each thread can query a row or many rows, verifying that they're not locked and locking them so no other session uses them. It's as simple as this within each thread:
SELECT *
FROM mytable
WHERE pg_try_advisory_lock(mytable.id)
LIMIT 100;
At the end, be sure to release the locks using pg_advisory_unlock.
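For example, after a row has been processed, release its lock with the same key that was used to acquire it (the literal 42 here is an illustrative row id):
SELECT pg_advisory_unlock(42);
An alternative that avoids advisory locks entirely is to claim rows directly with UPDATE ... RETURNING: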
BEGIN;
UPDATE mytable SET status = 'processing' WHERE status <> 'processing' AND id IN
  (SELECT id FROM mytable WHERE status <> 'processing' LIMIT 100) RETURNING *;
COMMIT;
There's a chance this will fail if some other query was working on the same rows, so if you get an error, retry until you get some data or no rows returned.
If you get zero rows, either you're finished or there are too many other simultaneous processes like yours.

Voluntary transaction priority in Oracle

I'm going to make up some SQL here. What I want is something like the following:
select ... for update priority 2; // Session 2
So when I run in another session
select ... for update priority 1; // Session 1
It immediately returns, and throws an error in session 2 (and hence does a rollback), and locks the row in session 1.
Then, whilst session 1 holds the lock, running the following in session 2.
select ... for update priority 2; // Session 2
Will wait until session 1 releases the lock.
How could I implement such a scheme? The priority x clause is just something I've made up; I only need something that can do two priority levels.
Also, I'm happy to hide all my logic in PL/SQL procedures, I don't need this to work for generic SQL statements.
I'm using Oracle 10g if that makes any difference.
I'm not aware of a way to interrupt an atomic process in Oracle like you're suggesting. I think the only thing you could do would be to programmatically break your larger processes down into smaller ones and poll some type of sentinel table. Instead of doing a single update of 1 million rows, you could write a proc that updates 1k rows at a time, checks a jobs table (or something similar) to see whether a higher-priority process is running, and if so pauses its own execution in a wait loop. This is the only approach I can think of that would keep your session alive during the process.
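A rough PL/SQL sketch of that batch-and-poll idea; the big_table, the jobs table, and all column names are hypothetical:
DECLARE
  v_higher NUMBER;
BEGIN
  LOOP
    UPDATE big_table
    SET processed = 'Y'
    WHERE processed = 'N' AND ROWNUM <= 1000;  -- one small batch
    EXIT WHEN SQL%ROWCOUNT = 0;                -- nothing left to do
    COMMIT;
    LOOP
      SELECT COUNT(*) INTO v_higher
      FROM jobs
      WHERE priority = 1 AND status = 'RUNNING';
      EXIT WHEN v_higher = 0;                  -- no higher-priority work pending
      DBMS_LOCK.SLEEP(5);                      -- yield while it runs
    END LOOP;
  END LOOP;
END;
/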
If you truly want to abort the progress of your currently running lower-priority thread, and losing your session is acceptable, then I would again suggest a jobs table that registers the SQL being run and the session ID it runs on. A higher-priority statement should check the jobs table and then issue a kill command to the low-priority session (http://www.oracle-base.com/articles/misc/KillingOracleSessions.php), along with inserting a record into the jobs table to note the fact that it was killed. When a higher-priority process finishes, it could check the jobs table to see whether it was responsible for killing anything and, if so, reissue it.
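The kill itself would look like this, with the sid and serial# taken from the jobs table (the values here are illustrative):
ALTER SYSTEM KILL SESSION '123,45678';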
That's what Resource Manager was implemented for.
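A minimal sketch of a two-level Resource Manager plan via DBMS_RESOURCE_MANAGER; the plan and group names are illustrative, and mapping sessions to the groups is a separate step:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'LOW_PRI', comment => 'low-priority work');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'PRIORITY_PLAN', comment => 'two priority levels');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'PRIORITY_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else gets CPU first', cpu_p1 => 100);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'PRIORITY_PLAN', group_or_subplan => 'LOW_PRI',
    comment => 'low priority gets CPU at level 2', cpu_p2 => 100);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
Note that Resource Manager prioritizes CPU and similar resources between sessions rather than reordering lock waits.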