A distributed transaction is waiting for lock - sql

I am trying to copy all the values from a column OLD_COL into another column NEW_COL inside the same table.
To achieve the result I want, I wrote down the following UPDATE in Oracle:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE NEW_COL IS NULL;
where MY_TABLE is a big table of about 400,000 rows.
When I try to run it, it fails with the error:
SQL Error: ORA-02049: timeout: distributed transaction waiting for lock
02049. 00000 - "timeout: distributed transaction waiting for lock"
*Cause: exceeded INIT.ORA distributed_lock_timeout seconds waiting for lock.
*Action: treat as a deadlock
I tried so to run the following query for updating one row alone:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE ID = '1'
and this works as intended.
Therefore, why can't I update all the rows in my table? Why is this error showing up?

Because the table has so many rows, the UPDATE holds and waits for locks for a long time as part of a distributed transaction. Oracle's distributed_lock_timeout defaults to 60 seconds; once the statement waits longer than that for a lock, it fails with this error.
You can try raising the timeout value:
ALTER SYSTEM SET distributed_lock_timeout=120 SCOPE=SPFILE;
or disable distributed recovery:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
https://docs.oracle.com/cd/A84870_01/doc/server.816/a76960/ds_txnma.htm
Note: distributed_lock_timeout is a static parameter, so the new value only takes effect after the instance is restarted.
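As a quick sanity check before changing anything, you can confirm the current setting (a minimal sketch, assuming you can query V$PARAMETER):
-- Show the current distributed lock timeout; the value is in seconds
SELECT name, value
FROM v$parameter
WHERE name = 'distributed_lock_timeout';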

Related

Updating first 7 characters of string with another 3 characters using SQL, throws "Error 19 - UNIQUE constraint failed: MGOFile.File."

I have a rather simple DB with a column called File, and I need to remove the first 7 characters of each row, and replace with a new string. I thought I had the code sorted, but I am getting error "SQLite3 Error 19 - UNIQUE constraint failed: MGOFile.File."
My table name is MGOFile and the column is File. Below is a simple SELECT over the first few rows; the left column is the raw data, and the right column is what I need the resulting rows to look like...
I query my table using this:
SELECT File,
       'T:\' || substr(File, 8, 2000) AS File
FROM MGOFile
WHERE File LIKE 'M:\_TV%';
I then tried updating using this:
UPDATE MGOFile
SET File = 'T:\' || substr(File, 8, 2000)
WHERE File like 'M:\_TV%';
But here is where my error comes in; this fails with the UNIQUE constraint error quoted above.
I am sure I am doing something simple wrong. I have done plenty of Googling, but all the responses are over my head; this is the most advanced SQL I have tried to do!
Any ideas on how I can update these strings with some simple SQLite?
Since checking for duplicates doesn't appear to detect the issue, perhaps capturing the values at the time of the failure will help. Do you have triggers by any chance? These will sometimes propagate an error that is reported as belonging to the table that fired the trigger.
So consider adding a table to log such data, along with a BEFORE UPDATE trigger that records the information at run time. To stop the failure rolling back (and thus undoing) the logged information, OR FAIL needs to be used.
Important: because the updates will not be rolled back, any rows changed before the failure stay changed. It is suggested that the above be used on a test database.
-- The code
DROP TABLE IF EXISTS lastupdated;
-- Create the logging table
CREATE TABLE IF NOT EXISTS lastupdated (counter, lastfile_before, lastfile_after, id_of_the_row);
-- Initialise it so it's plain to see if nothing has been done
INSERT INTO lastupdated VALUES(0,'nothing','nothing',0);
-- Add the Trigger to record the debugging information BEFORE the update
CREATE TRIGGER IF NOT EXISTS monitorupdateprogress
BEFORE UPDATE ON MGOFile
BEGIN
UPDATE lastupdated SET counter = counter +1, lastfile_before = old.File, lastfile_after = new.File, id_of_the_row = old.rowid;
END
;
UPDATE OR FAIL MGOFile -- OR FAIL will halt but NOT ROLLBACK
SET File = 'T:\' || substr(File, 8, 2000)
WHERE File like 'M:\_TV%';
SELECT * FROM lastupdated; -- will not run if there is a fail but should be run after the fail
This would, assuming the failure occurs, record:
the nth update in the counter column;
the value of the File column before the change in the lastfile_before column;
the value that the File column would have been updated to in the lastfile_after column;
the rowid of the failing row in the MGOFile table (this assumes that MGOFile is not a table defined using WITHOUT ROWID).
If the table was defined using WITHOUT ROWID, you could change the trigger to set id_of_the_row = 0; the value would then be meaningless.
Testing/Results: the version of the above that was used for testing is :-
-- Solely for testing the code below
DROP TABLE IF EXISTS MGOFile;
CREATE TABLE IF NOT EXISTS MGOFile (File TEXT PRIMARY KEY);
-- Some testing data
INSERT INTO MGOFile VALUES
('M:\_TV/9-1-1.so2e09.web.x264-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x265-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x277-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x278-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x279-tbs[eztv].mkv'),
('M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x280-tbs[eztv].mkv')
;
-- Find values that would collide after truncation to 170 characters (duplicate check)
SELECT substr(File,170,8) FROM MGOFile GROUP BY Substr(File,8,170) HAVING count() > 1;
-- The code
DROP TABLE IF EXISTS lastupdated;
-- Create the logging table
CREATE TABLE IF NOT EXISTS lastupdated (counter, lastfile_before, lastfile_after, id_of_the_row);
-- Initialise it so it's plain to see if nothing has been done
INSERT INTO lastupdated VALUES(0,'nothing','nothing',0);
-- Add the Trigger to record the debugging information BEFORE the update
CREATE TRIGGER IF NOT EXISTS monitorupdateprogress
BEFORE UPDATE ON MGOFile
BEGIN
UPDATE lastupdated SET counter = counter +1, lastfile_before = old.File, lastfile_after = new.File, id_of_the_row = old.rowid;
END
;
SELECT * FROM MGOFile;
UPDATE OR FAIL MGOFile -- OR FAIL will halt but NOT ROLLBACK
SET File = 'T:\' || substr(File, 8, 170) -- <<<<<<<<<<<<<<<<<<<< truncate reduced to force UNIQUE constraint
WHERE File like 'M:\_TV%';
SELECT * FROM lastupdated; -- will not run if there is a fail
When the above is run, the message is :-
UPDATE OR FAIL MGOFile -- OR FAIL will halt but NOT ROLLBACK
SET File = 'T:\' || substr(File, 8, 170) -- <<<<<<<<<<<<<<<<<<<< truncate reduced to force UNIQUE constraint
WHERE File like 'M:\_TV%'
> UNIQUE constraint failed: MGOFile.File
> Time: 0.094s
Running SELECT * FROM lastupdated; returns :-
counter = 6
lastfile_before = M:\_TV/9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x278-tbs[eztv].mkv
lastfile_after = T:\9-1-1.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x266-tbs[eztv].mkv.so2e09.web.x27
id_of_the_row = 6
In the above contrived example the issue can easily be determined (albeit that the duplicate search also found the same issue): the error is on the 6th row, which contains mkv.so2e09.web.x278-tbs[eztv] but was truncated by the update to .mkv.so2e09.web.x27, making it a duplicate of the 5th row, which has .mkv.so2e09.web.x277-tbs[eztv] and was also truncated to .mkv.so2e09.web.x27.
P.S. Have you tried using just
UPDATE MGOFile
SET File = 'T:\' || substr(File, 8)
WHERE File like 'M:\_TV%';
i.e. removing the truncation.
The error seems quite clear to me. You are changing the file name to a name that is already in the table.
You can identify the duplicates by running:
SELECT f.*
FROM MGOFile f
WHERE EXISTS (SELECT 1
              FROM MGOFile f2
              WHERE f2.File = 'T:\' || substr(f.File, 8, 2000)
             ) AND
      f.File LIKE 'M:\_TV%';
I don't know what you want to do about the duplicate.
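If the answer is simply to skip the colliding rows, a minimal sketch along these lines might work (it assumes the same 'T:\' prefix rule and that leaving the duplicates untouched is acceptable):
-- Update only rows whose new name does not already exist in the table.
-- Note: this does not guard against two updated rows mapping to the same new name.
UPDATE MGOFile
SET File = 'T:\' || substr(File, 8)
WHERE File LIKE 'M:\_TV%'
  AND NOT EXISTS (SELECT 1
                  FROM MGOFile f2
                  WHERE f2.File = 'T:\' || substr(MGOFile.File, 8));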

SQL Column will not allow entry after being altered

I have a SQL table that has a column "Stamp" that was originally set up as nchar(10). The data that was entered in this field was only 9 characters long (i.e. XX111.jpg). However, I have changed the format of the data being entered; it is now XX-XXX111.jpg. I ran this ALTER statement to increase the column size:
Alter Table tblData
Alter Column Stamp nvarchar(50)
Afterwards I would run an update statement to update the NULL values in the database:
Update tblData Set Stamp = 'XX-XXX111.jpg' where Updated > '2014-08-01' and Stamp is null
When I do this I get the following error:
(22 row(s) affected)
Msg 8152, Level 16, State 13, Procedure ChangedMECTrigger, Line 31
String or binary data would be truncated.
The statement has been terminated.
I don't understand how this is not working. Where am I going wrong?
You apparently have a trigger on the table:
ChangedMECTrigger
You need to update the data length on this too.
You can find the table's triggers in SSMS Object Explorer, under the table's Triggers node.
There's a trigger tied to tblData called ChangedMECTrigger, and something in the trigger's logic is causing the error. You could temporarily disable the trigger prior to running your UPDATE tblData... statement, like this:
disable trigger ChangedMECTrigger on tblData
Update tblData Set Stamp = 'XX-XXX111.jpg' where Updated > '2014-08-01' and Stamp is null
enable trigger ChangedMECTrigger on tblData
Or you could look at the trigger's code to find the issue. Chances are there's something in the trigger still using nchar(10) that needs to be updated to nvarchar(50).
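If you want to see what the trigger actually does, here is a quick sketch using standard SQL Server metadata calls (nothing here is specific to your schema beyond the trigger name):
-- Print the trigger's source so you can look for a leftover nchar(10)
EXEC sp_helptext 'ChangedMECTrigger';
-- or
SELECT OBJECT_DEFINITION(OBJECT_ID('ChangedMECTrigger'));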

Encountering deadlock while deleting and running update statistics

I am running a stored procedure that deletes data from a table, the procedure looks like:
-- (assumes @rows_deleted is declared and initialised to a value > 0 earlier in the proc)
SET ROWCOUNT 10000
WHILE ( @rows_deleted > 0 )
BEGIN
    BEGIN TRAN
        DELETE TABLE1 WHERE status = '1'
        SELECT @rows_deleted = @@rowcount
    COMMIT TRAN
END
While this procedure is running, update statistics is also running on the same table.
The table's lock scheme is all pages.
I am wondering, if the locking scheme is allpages, how can it encounter a deadlock?
There is nothing else running on this table.
I am using Sybase 12.5 ASE
It turned out that update statistics was indeed creating the deadlock. It takes a shared lock on the table, and since the table uses the allpages locking scheme, the delete has to wait for it to complete. But instead of blocking the delete query for a long time, Sybase chose to terminate it as the deadlock victim.
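As a side note, if you want to confirm the table's lock scheme before and after any change, a small sketch using the standard ASE system procedure (TABLE1 is the placeholder name from the question):
-- sp_help reports the lock scheme (allpages, datapages or datarows) among its output
exec sp_help TABLE1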

How do I use a database to manage a semaphore?

If several instances of the same code are running on different servers, I would like to use a database to make sure a process doesn't start on one server if it's already running on another server.
I could probably come up with some workable SQL commands that used Oracle transaction processing, latches, or whatever, but I'd rather find something that's tried and true.
Years ago a developer who was a SQL wiz had a single SQL transaction that took the semaphore and returned true if it got it, and false if it didn't. Then at the end of my processing, I'd need to run another SQL transaction to release the semaphore. It would be cool, but I don't know if it's possible for a database-supported semaphore to have a time-out. A timeout would be a huge bonus!
EDIT:
Here are what might be some workable SQL commands, but no timeout except through a cron job hack:
---------------------------------------------------------------------
--Setup
---------------------------------------------------------------------
CREATE TABLE "JOB_LOCKER" ( "JOB_NAME" VARCHAR2(128 BYTE), "LOCKED" VARCHAR2(1 BYTE), "UPDATE_TIME" TIMESTAMP (6) );
CREATE UNIQUE INDEX "JOB_LOCKER_PK" ON "JOB_LOCKER" ("JOB_NAME") ;
ALTER TABLE "JOB_LOCKER" ADD CONSTRAINT "JOB_LOCKER_PK" PRIMARY KEY ("JOB_NAME");
ALTER TABLE "JOB_LOCKER" MODIFY ("JOB_NAME" NOT NULL ENABLE);
ALTER TABLE "JOB_LOCKER" MODIFY ("LOCKED" NOT NULL ENABLE);
insert into job_locker (job_name, locked) values ('myjob','N');
commit;
---------------------------------------------------------------------
--Execute at the beginning of the job
--AUTOCOMMIT MUST BE OFF!
---------------------------------------------------------------------
select * from job_locker where job_name='myjob' and locked = 'N' for update NOWAIT;
--returns one record if it's ok. Otherwise returns ORA-00054. Any other thread attempting to get the record gets ORA-00054.
update job_locker set locked = 'Y', update_time = sysdate where job_name = 'myjob';
--1 rows updated. Any other thread attempting to get the record gets ORA-00054.
commit;
--Any other thread attempting to get the record with locked = 'N' gets zero results.
--You could have code to query for that job name with locked = 'Y' and, if still zero results, add the record.
---------------------------------------------------------------------
--Execute at the end of the job
---------------------------------------------------------------------
update job_locker set locked = 'N', update_time = sysdate where job_name = 'myjob';
--Any other thread attempting to get the record with locked = 'N' gets no results.
commit;
--One record returned to any other thread attempting to get the record with locked = 'N'.
---------------------------------------------------------------------
--If the above 'end of the job' fails to run (system crash, etc)
--The 'locked' entry would need to be changed from 'Y' to 'N' manually
--You could have a periodic job to look for old timestamps and locked='Y'
--to clear those.
---------------------------------------------------------------------
You should look into DBMS_LOCK. Essentially, it exposes the enqueue locking mechanism that Oracle uses internally, except that it lets you define a lock type of 'UL' (user lock). Locks can be held shared or exclusive, and a request to take a lock, or to convert a lock from one mode to another, supports a timeout.
I think it will do what you want.
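A minimal PL/SQL sketch of how that can look (the lock name 'myjob_lock' is made up for illustration, and your schema needs EXECUTE on DBMS_LOCK):
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- Map a name of our choosing to a lock handle
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'myjob_lock', lockhandle => l_handle);
  -- Try to take the lock exclusively, waiting at most 10 seconds
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  IF l_status = 0 THEN
    -- ... do the work that must not run concurrently on another server ...
    l_status := DBMS_LOCK.RELEASE(l_handle);
  ELSIF l_status = 1 THEN
    DBMS_OUTPUT.PUT_LINE('Timed out waiting for the lock; another server holds it.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('DBMS_LOCK.REQUEST returned status ' || l_status);
  END IF;
END;
/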
Hope that helps.

The object name 'FacetsXrefStaging.Facets.Facets.FacetsXrefImport' contains more than the maximum number of prefixes. The maximum is 2

Hi, I have created a proc which truncates the staging tables and reseeds the identity, but I am getting the error: The object name 'FacetsXrefStaging.Facets.Facets.FacetsXrefImport' contains more than the maximum number of prefixes. The maximum is 2.
Create proc TruncateAndReseedFacetsXrefStagingTables
'
'
Declare variables
'
'
SET @iSeed = ( SELECT CASE WHEN MAX(FacetsXrefId) IS NULL
                           THEN -2147483648
                           ELSE MAX(FacetsXrefId) + 1
                      END
               FROM FacetsXref.Facets.Facets.FacetsXrefCertified
             )
TRUNCATE TABLE FacetsXrefStaging.Facets.Facets.FacetsXrefImport
DBCC CHECKIDENT ('FacetsXrefStaging.Facets.FacetsXrefImport', RESEED, @iSeed)
TRUNCATE TABLE FacetsXrefStaging.Facets.FacetsXrefImport
Can anybody help me with that?
I am using SQL Server 2005.
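The error means an object name in that position may carry at most two prefixes, i.e. database.schema.object; a four-part name like FacetsXrefStaging.Facets.Facets.FacetsXrefImport is one too many. A hedged guess at the intended three-part names, assuming the duplicated Facets is a stray extra prefix:
SET @iSeed = ( SELECT CASE WHEN MAX(FacetsXrefId) IS NULL
                           THEN -2147483648
                           ELSE MAX(FacetsXrefId) + 1
                      END
               FROM FacetsXref.Facets.FacetsXrefCertified )
TRUNCATE TABLE FacetsXrefStaging.Facets.FacetsXrefImport
DBCC CHECKIDENT ('FacetsXrefStaging.Facets.FacetsXrefImport', RESEED, @iSeed)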
I am actually having this problem that the OP had - and there's no typo involved in my situation. :-)
This is a table that exists on a different server from the server I'm on. The servers are linked.
The queries above and below the TRUNCATE statement work just fine.
The TRUNCATE does not work.
...Anonymized to protect the innocent...
select count(*) as mc from servername.databasename.dbo.tablename -- works
truncate TABLE [servername].[databasename].[dbo].[tablename] -- error
select count(*) as mc from servername.databasename.dbo.tablename -- works
Error message:
The object name 'servername.databasename.dbo.'
contains more than the maximum number of prefixes. The maximum is 2.
Yes, the TRUNCATE is commented out in the screenshot; I only noticed that after I had done all the blur effects and wasn't going to go back and re-make the image, sorry :-( Ignore the BEGIN/END TRAN and the comment markers: the TRUNCATE does not work, see the error above.
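For the linked-server case, one common workaround is to run the TRUNCATE on the remote server itself instead of through a four-part name. A minimal sketch, reusing the anonymized names above and assuming RPC Out is enabled for the linked server:
-- Executes on the linked server, so only database.schema.table is needed there
EXEC ('TRUNCATE TABLE databasename.dbo.tablename') AT [servername];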