I have the following query running at three different instances of my application, all trying to read rows from a table one by one, like a queue.
SELECT * FROM order_request_status ors WHERE ors.status = 'new' and ROWNUM <= 1 order by ors.id asc for update skip locked
Now the issue: if the row is locked by one application instance, I want my second application instance to read the next unlocked row, but that is not happening with FOR UPDATE SKIP LOCKED.
Please suggest how I can implement a queue-like feature using Oracle DB.
The SKIP LOCKED processing happens after rows are returned by the query. Because of ROWNUM <= 1, your query stops after reading just one record, so if the record it finds has already been locked by another session, the query skips it and never finds any more records.
If you only want to process one record at a time, just fetch one record from the cursor instead of using ROWNUM to limit it.
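A minimal sketch of that fetch-one approach, assuming the id and status columns from your query (the 'processing' status value is an invented marker; use whatever your workflow needs):

declare
  cursor c is
    select ors.id
      from order_request_status ors
     where ors.status = 'new'
     order by ors.id
       for update skip locked;
  l_id order_request_status.id%type;
begin
  open c;
  -- the lock is taken at fetch time; rows already locked by other
  -- sessions are skipped, so each instance gets a different row
  fetch c into l_id;
  if c%found then
    -- mark the row so no other instance picks it up after the commit
    update order_request_status
       set status = 'processing'
     where current of c;
  end if;
  close c;
  commit;
end;
/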
I am using ODI, and I have a single mapping and a scenario for it. How can I automatically restart the scenario from the point where it failed?
For example, if I have around 100 rows and 90 get inserted before the server shuts down or the run stops for some other reason, how can I restart the scenario from that point and insert the remaining 10 rows, without starting the scenario from the beginning?
You can refer to this to restart the session from where it failed.
ODI Session Restart - Scenarios & Load Plans
I don't know if there is a way to do exactly what you asked, but you can try to write a procedure using FETCH: create a variable that holds the number of rows already inserted, and when the scenario restarts you can continue from beyond that point.
SELECT val
FROM rownum_order_test
ORDER BY val
OFFSET variable_rows ROWS FETCH NEXT 4 ROWS ONLY;
Or add a status to each row: after a successful insert, change the status from 1 to 2, and when you restart, only insert the rows that still have status 1, as sketched below.
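A rough sketch of that status-column idea (all table and column names here are invented; adapt them to your mapping):

-- load_status: 1 = pending, 2 = already inserted
insert into target_tab (id, val)
select id, val
  from source_tab
 where load_status = 1;

update source_tab
   set load_status = 2
 where load_status = 1;

commit;

Because both statements commit together, a crash before the commit rolls back the inserts and the status change, so the restart simply processes every row still at status 1.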
So given this transaction:
select * from table_a where field_a = 'A' for update;
Assuming this returns multiple rows, will the database lock all the results right off the bat, or will it lock them one row at a time?
If the latter is true, does that mean running this query concurrently can result in a deadlock?
And thus, is adding an ORDER BY to keep the locking order consistent what is needed to solve this problem?
The documentation explains what happens as follows:
FOR UPDATE
FOR UPDATE causes the rows retrieved by the SELECT statement to be
locked as though for update. This prevents them from being locked,
modified or deleted by other transactions until the current
transaction ends. That is, other transactions that attempt UPDATE,
DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE
or SELECT FOR KEY SHARE of these rows will be blocked until
the current transaction ends; conversely, SELECT FOR UPDATE will
wait for a concurrent transaction that has run any of those commands
on the same row, and will then lock and return the updated row (or no
row, if the row was deleted). Within a REPEATABLE READ or
SERIALIZABLE transaction, however, an error will be thrown if a row
to be locked has changed since the transaction started. For further
discussion see Section 13.4.
The direct answer to your question is that Postgres cannot lock all the rows "right off the bat"; it has to find them first. Remember, this is row-level locking rather than table-level locking.
The documentation includes this note:
SELECT FOR UPDATE modifies selected rows to mark them locked, and so
will result in disk writes.
I interpret this as saying that Postgres executes the SELECT query and as it finds the rows, it marks them as locked. The lock (for a given row) starts when Postgres identifies the row. It continues until the end of the transaction.
Based on this, I think it is possible for a deadlock situation to arise using SELECT FOR UPDATE.
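As a hypothetical illustration (explicit ids added so the lock order is visible), two sessions acquiring row locks in opposite orders can deadlock:

-- Session 1:
BEGIN;
SELECT * FROM table_a WHERE field_a = 'A' AND id = 1 FOR UPDATE;  -- locks row 1

-- Session 2:
BEGIN;
SELECT * FROM table_a WHERE field_a = 'A' AND id = 2 FOR UPDATE;  -- locks row 2

-- Session 1:
SELECT * FROM table_a WHERE field_a = 'A' AND id = 2 FOR UPDATE;  -- blocks, waiting on session 2

-- Session 2:
SELECT * FROM table_a WHERE field_a = 'A' AND id = 1 FOR UPDATE;
-- deadlock detected; Postgres aborts one of the two transactions

Adding ORDER BY id to the multi-row query makes every session acquire the locks in the same order, which avoids this interleaving.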
I'm using node.js, node-postgres and Postgres to put together a script to process quite a lot of data from a table. I'm using the cluster module as well, so I'm not stuck with a single thread.
I don't want one of the child processes in the cluster duplicating the processing of another. How can I update the rows I just received from a select query without the possibility of another process or query having also selected the same rows?
I'm assuming my SQL query will look something like:
BEGIN;
SELECT * FROM mytable WHERE ... LIMIT 100;
UPDATE mytable SET status = 'processing' WHERE ...;
COMMIT;
Apologies for my poor knowledge of Postgres and SQL; I've used it once before in a simple PHP web app, but never before with node.js.
If you're using a multithreaded application you cannot, and should not, be using FOR UPDATE (in the main thread, anyway); what you need to be using is an advisory lock. Each thread can query a row or many rows, verifying that they're not locked and then locking them so no other session uses them. It's as simple as this within each thread:
select * from mytab
where pg_try_advisory_lock(mytab.id)  -- true only when no other session holds the lock
limit 100
At the end, be sure to release the locks using pg_advisory_unlock.
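A sketch of the release step (the id list is hypothetical; pass whatever ids this worker locked earlier):

select pg_advisory_unlock(id)
  from mytab
 where id in (101, 102, 103);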
BEGIN;
UPDATE mytable SET status = 'processing'
WHERE status <> 'processing' AND id IN
  (SELECT id FROM mytable WHERE status <> 'processing' LIMIT 100)
RETURNING *;
COMMIT;
There's a chance that's going to fail if some other query was working on the same rows, so if you get an error, retry it until you get some data or no rows returned.
If you get zero rows, either you're finished or there are too many other simultaneous processes like yours.
I have a method to delete records in a DB. The query is created correctly and the records are deleted, but only after 40 seconds to 1 minute.
If I execute the query at the DB prompt, the record is deleted immediately.
The code I have only does the following:
getting the database connection
preparing the statement, passing 3 variables to the "delete from" statement
calling executeUpdate on the statement
calling commit on the connection
closing the db connection
What could be wrong? Any clue?
You are implicitly assuming that a DELETE statement is very trivial in all cases, which is not always true. At the very least, it needs to find the records it wants to remove in the table. This may require an entire table scan if, for example, the WHERE predicate cannot leverage an existing index.
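A hypothetical illustration, since the original statement isn't shown; if the slow delete looks like this:

DELETE FROM orders WHERE customer_id = ? AND region = ? AND order_date = ?;

then an index covering the predicate lets the database locate the rows directly instead of scanning the whole table:

CREATE INDEX orders_del_ix ON orders (customer_id, region, order_date);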
I'm working on a web app connected to Oracle. We have a table in Oracle with a column ACTIVATED; only one row can have this column set to 1 at any one time. To enforce this, we have been using the SERIALIZABLE isolation level in Java, however we are running into the "cannot serialize transaction" error and cannot work out why.
We were wondering if an isolation level of READ COMMITTED would do the job. So my question is this:
If we have a transaction which involves the following SQL:
SELECT *
FROM MODEL;
UPDATE MODEL
SET ACTIVATED = 0;
UPDATE MODEL
SET ACTIVATED = 1
WHERE RISK_MODEL_ID = ?;
COMMIT;
Given that it is possible for more than one of these transactions to be executing at the same time, would it be possible for more than one MODEL row to have the ACTIVATED flag set to 1?
Any help would be appreciated.
Your solution should work: your first update will lock every row in the table. If another transaction holds a lock on one of those rows and has not finished, the update will wait. Your second update then guarantees that only one row has the value 1, because your transaction already holds the row locks (it doesn't prevent INSERT statements, however).
You should also make sure that a row with the given RISK_MODEL_ID exists (or you will have zero rows with the value 1 at the end of your transaction).
To prevent concurrent INSERT statements, you would LOCK the table (in EXCLUSIVE MODE).
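For reference, that statement would be:

LOCK TABLE model IN EXCLUSIVE MODE;

Other sessions can still query the table, but their INSERT, UPDATE and DELETE statements will wait until your transaction ends.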
You could consider using a unique, function-based index to let Oracle handle the constraint of only having one row with the ACTIVATED flag set to 1.
CREATE UNIQUE INDEX MODEL_IX ON MODEL ( DECODE(ACTIVATED, 1, 1, NULL));
This would stop more than one row having the flag set to 1, but does not mean that there is always one row with the flag set to 1.
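For example, once one row already has the flag set (hypothetical data):

UPDATE MODEL SET ACTIVATED = 1 WHERE RISK_MODEL_ID = 2;
-- ORA-00001: unique constraint (MODEL_IX) violated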
If what you want is to ensure that only one transaction can run at a time then you can use the FOR UPDATE syntax. As you have a single row which needs locking this is a very efficient approach.
declare
  cursor c is
    select activated
      from model
     where activated = 1
       for update of activated;
  r c%rowtype;
begin
  open c;
  -- this fetch will wait if another transaction holds the lock
  -- (or fail immediately if NOWAIT is specified; see below)
  fetch c into r;
  -- ....
  update model
     set activated = 0
   where current of c;
  update model
     set activated = 1
   where risk_model_id = ?;
  close c;
  commit;
end;
/
The commit frees the lock.
The default behaviour is to wait until the row is freed. Otherwise we can specify NOWAIT, in which case any other session attempting to lock the current active row will fail immediately, or we can specify WAIT with a timeout in seconds. NOWAIT is the option to choose to absolutely avoid the risk of hanging, and it also gives us the chance to inform the user that someone else is updating the table, which they might want to know.
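For instance, with NOWAIT the lock attempt fails fast:

select activated
  from model
 where activated = 1
   for update of activated nowait;
-- raises ORA-00054 (resource busy) immediately if another session holds the lock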
This approach is much more scalable than updating all the rows in the table. Use a function-based index as WW showed to enforce the rule that only one row can have ACTIVATED=1.