I am migrating from Oracle to PostgreSQL and I have a question.
In Oracle I used to use a query like this:
SELECT id FROM table_name WHERE id = '123' FOR UPDATE WAIT 30
As far as I understand, PostgreSQL only has the NOWAIT option, so I have changed the query to this:
SELECT id FROM table_name WHERE id = '123' FOR UPDATE
The question is: how can I apply a lock timeout? I saw that I could send additional queries, for example:
set lock_timeout = 30000;  -- or: set lock_timeout = '30s';
select ... for update ...;
set lock_timeout = 0;
However, in this case I am adding two additional queries, which I don't want. Is there any other way to set a lock timeout?
There is a possibility to configure the lock timeout in the PostgreSQL server configuration.
Change the parameter lock_timeout = '30s' in pgsql/11/data/postgresql.conf.
After that, reload the configuration or restart PostgreSQL.
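If a server-wide default is too broad, the same parameter can also be scoped to a single database or role, and you can verify the active value after a reload. A rough sketch (mydb and myuser are placeholder names, not from the question):

-- after editing postgresql.conf, reload and check the active value
SELECT pg_reload_conf();
SHOW lock_timeout;

-- or scope the timeout to one database or one role instead of the whole server
ALTER DATABASE mydb SET lock_timeout = '30s';
ALTER ROLE myuser SET lock_timeout = '30s';

Settings made with ALTER DATABASE / ALTER ROLE apply to new sessions, so no extra query is needed per statement.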
I am trying to copy all the values from a column OLD_COL into another column NEW_COL inside the same table.
To achieve the result I want, I wrote down the following UPDATE in Oracle:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE NEW_COL IS NULL;
where MY_TABLE is a big table composed of 400,000 rows.
When I try to run it, it fails with the error:
SQL Error: ORA-02049: timeout: distributed transaction waiting for lock
02049. 00000 - "timeout: distributed transaction waiting for lock"
*Cause: exceeded INIT.ORA distributed_lock_timeout seconds waiting for lock.
*Action: treat as a deadlock
So I tried to run the following query, updating one row only:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE ID = '1'
and this works as intended.
Therefore, why can't I update all the rows in my table? Why is this error showing up?
Because there are many rows in your table, the UPDATE locks them and takes a long time.
The Oracle default for distributed_lock_timeout is 60 seconds; if your statement waits on a lock for longer than that, you get this error.
You can try to increase the timeout value:
ALTER SYSTEM SET distributed_lock_timeout=120 SCOPE=SPFILE;
Or disable distributed recovery:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
https://docs.oracle.com/cd/A84870_01/doc/server.816/a76960/ds_txnma.htm
Note:
Remember: distributed_lock_timeout is a static parameter, so after running the ALTER SYSTEM command you need to restart the instance for the change to take effect.
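For reference, you can check the current value before changing it (assuming you have access to V$PARAMETER), for example:

SELECT name, value
FROM v$parameter
WHERE name = 'distributed_lock_timeout';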
The same statement:
DELETE(SELECT * FROM tablename WHERE id=12)
runs normally in SQL Developer, but takes forever when run through the OCCI API.
I have checked that the query "SELECT * FROM tablename WHERE id=12" matches a non-empty set of rows.
More specifically I use the following syntax:
oracle::occi::Statement *deleteStm = con->createStatement("DELETE(SELECT * FROM tablename WHERE id=12)");
oracle::occi::ResultSet *rs = deleteStm->executeQuery();
I suspect that in your case you've simply got an uncommitted transaction. It goes like this:
session 1                            session 2
---------                            ---------
DELETE ...
(table/rows are now locked)
                                     SELECT * FROM ...
                                     (you will still see all the data)
                                     DELETE ...
                                     (and now you wait and wait
                                      until the lock is released)
COMMIT;
                                     SELECT * FROM ...
                                     (now the result set is empty)
I strongly encourage you to read Data Concurrency and Consistency in the Oracle documentation.
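If you want to confirm that a lock is what is blocking your DELETE, a query along these lines (my sketch, using the BLOCKING_SESSION column of V$SESSION) shows who is holding you up:

SELECT sid, serial#, blocking_session, seconds_in_wait, event
FROM v$session
WHERE blocking_session IS NOT NULL;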
I want to prevent simultaneous updates (by multiple sessions) to a record in my stored procedure.
1. I am using a SELECT FOR UPDATE statement for the particular row that I want to update. This locks the record.
2. I then update this record and commit. The lock is released and the record is available for another user/session to work with.
However, when I run the procedure, I find that simultaneous updates are still happening, which means SELECT FOR UPDATE is not working as expected.
Please provide some suggestions.
Sample code is below:
IF <condition> THEN
   -- do something
ELSIF <other condition> THEN
   BEGIN
      SELECT HIGH_NBR INTO P_NBR
        FROM ROUTE
       WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>
         FOR UPDATE OF HIGH_NBR;

      UPDATE ROUTE
         SET HIGH_NBR = (HIGH_NBR + 1)
       WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>;

      COMMIT;
   END;
END IF;
In a multi-user environment, I am observing that the SELECT FOR UPDATE lock is not taking effect.
I just tested the scenario with two different computers (sessions). Here is what I did:
From one computer, I executed the SELECT FOR UPDATE statement, locking a row.
From another computer, I executed an UPDATE statement for the same record.
The update did not happen and the UPDATE statement did not complete, even after a long time.
When will the lock be released if we issue a SELECT FOR UPDATE for a record?
First of all, you need to set auto-commit to false before starting the query.
To check that your code is working, you can use two Java threads with a CyclicBarrier.
You should also add timestamps in your code to check when each point in the code is reached.
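To see what such a two-session test should observe in plain SQL, here is a rough sketch (the LC_CD and ROUTE_NBR values are made-up placeholders for the ROUTE table in the question). The row lock taken by SELECT FOR UPDATE is held until the locking session commits or rolls back, and only then does the other session's UPDATE proceed:

-- Session 1: take the row lock
SELECT HIGH_NBR
  FROM ROUTE
 WHERE LC_CD = 10 AND ROUTE_NBR = 1
   FOR UPDATE OF HIGH_NBR;

-- Session 2: this statement now blocks, waiting on session 1's lock
UPDATE ROUTE
   SET HIGH_NBR = HIGH_NBR + 1
 WHERE LC_CD = 10 AND ROUTE_NBR = 1;

-- Session 1: releasing the lock lets session 2's UPDATE complete
COMMIT;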
I am wondering how to rewrite the following SQL Server 2005/2008 script for SQL Server 2000 which didn't have OUTPUT yet.
Basically, I would like to update rows and return the updated rows without creating deadlocks.
Thanks in advance!
UPDATE TABLE
SET Locked = 1
OUTPUT INSERTED.*
WHERE Locked = 0
You can't do this cleanly in SQL Server 2000.
What you can do is use a transaction and some lock hints to prevent a race condition. Your main problem is two processes accessing the same row(s), not a deadlock. See SQL Server Process Queue Race Condition for more.
BEGIN TRANSACTION
SELECT * FROM TABLE WITH (ROWLOCK, READPAST, UPDLOCK) WHERE Locked = 0
UPDATE TABLE
SET Locked = 1
WHERE Locked = 0
COMMIT TRANSACTION
I haven't tried this, but you could also try a SELECT from INSERTED inside an UPDATE trigger.
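If you also need the statement to return exactly the rows it flagged (what OUTPUT gives you in 2005+), one untested variant of the same pattern records the keys it locks first. This assumes the table has an ID key column, which the question does not show; TABLE stands in for the real table name, as in the question:

BEGIN TRANSACTION

DECLARE @picked TABLE (ID int PRIMARY KEY)

-- remember which unlocked rows we grabbed (READPAST skips rows locked by others)
INSERT INTO @picked (ID)
SELECT ID FROM TABLE WITH (ROWLOCK, READPAST, UPDLOCK)
WHERE Locked = 0

-- flag only those rows
UPDATE t
SET Locked = 1
FROM TABLE t
JOIN @picked p ON p.ID = t.ID

-- return the rows we just updated
SELECT t.*
FROM TABLE t
JOIN @picked p ON p.ID = t.ID

COMMIT TRANSACTION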
I want to execute the following statement through a linked server (OPENQUERY):
UPDATE SAP_PLANT
SET (OWNER, OWNER_COUNTRY) = (SELECT import.AFNAME, import.COUNTRY
FROM SAP_IMPORT_CUSTOMERS import, SAP_PLANT plant
WHERE plant.SAP_FL = import.SAP_NO
AND import.role ='OWNER')
I've tried to form it into the following syntax, without success :(
update openquery('my_linked_server, 'select column_1, column_2 from table_schema.table_name where pk = pk_value')
set column_1 = 'my_value1', column_2 = 'my_value2'
I hope this is possible?
I guess this is not really a query you want to open, but rather an SQL statement you want to execute. So instead of OPENQUERY, you should use EXECUTE. See example G here: http://msdn.microsoft.com/en-us/library/ms188332.aspx
So your script should look like:
execute ('your sql command here') at my_linked_server
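Applied to the statement in the question, that would look roughly like the following (my sketch, untested; the linked server needs RPC Out enabled, and note the doubled single quotes around OWNER inside the string, since the whole Oracle-style UPDATE is sent as-is to the remote server):

execute ('UPDATE SAP_PLANT
          SET (OWNER, OWNER_COUNTRY) = (SELECT import.AFNAME, import.COUNTRY
                                        FROM SAP_IMPORT_CUSTOMERS import, SAP_PLANT plant
                                        WHERE plant.SAP_FL = import.SAP_NO
                                        AND import.role = ''OWNER'')') at my_linked_server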
Are you getting a syntax error? The server parameter in your OPENQUERY call is missing a trailing quote: change 'my_linked_server to 'my_linked_server'.