I am trying to run a simple DROP TABLE statement in PL/SQL Developer on some tables, but they won't drop. The tables are quite large, but I've never had this problem before. After over half an hour, this error is returned: ORA-04021: timeout occurred while waiting to lock object. Things that I have tried:
Restarting PL/SQL Developer
Deleting other tables (my schema is not full, and it has previously had a lot less space)
Please try:
to find the sid: select * from v$locked_object; (the SESSION_ID column is the sid)
to find the serial#: select sid, serial# from v$session where sid = <sid_from_the_previous_query>;
to kill the session: alter system kill session '<sid>,<serial#>';
Or, if you are the only user, just try COMMIT or ROLLBACK in your own session.
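If it helps, the two lookups above can be combined into a single query. This is only a sketch; MY_TABLE is a placeholder for the table you are trying to drop, and you need access to the v$ views and dba_objects:
-- find the session(s) holding a lock on the table you want to drop
select o.object_name, s.sid, s.serial#, s.username, s.status, lo.locked_mode
from   v$locked_object lo
       join dba_objects o on o.object_id = lo.object_id
       join v$session   s on s.sid       = lo.session_id
where  o.object_name = 'MY_TABLE';
-- then, with the sid and serial# from the result:
-- alter system kill session '<sid>,<serial#>';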
My Teradata version is 15.00.
I got Teradata error 7423: [HY000] Object already locked and NOWAIT. Transaction Aborted, after identifying that a table is locked.
-- use this command to test if a table is locked for update/insert/delete.
Lock Table DB1.TABLE1 write nowait
Select * from DB1.TABLE1;
I have tried a few things, but I cannot get the locked table out of its misery.
I tried to release the DB archive lock, which is usually the root cause of this kind of blocking issue, but my own session was then blocked by some invisible hand/ghost that even Viewpoint cannot detect.
I aborted two table update attempts, which I thought were causing the blocking, but that did not help.
I cannot run an UPDATE statement against this table because of the locking issue:
UPDATE DB1.TABLE1
SET UpdatedDate = CURRENT_TIMESTAMP
,LastRunDate = CURRENT_TIMESTAMP
,Status = 'P'
WHERE PackageID = 100001;
I can still select data from this table:
Select * from DB1.TABLE1;
Thanks for any tips/suggestions. Really appreciate it.
I used this SQL and it released the lock on a database:
release lock db_name, override;
It is interesting that releasing the online archive logging did not work:
LOGGING ONLINE ARCHIVE OFF FOR DB_NAME;
I am trying to figure out how to lock an entire table against writes in Postgres, but it doesn't seem to be working, so I am assuming I am doing something wrong.
Table name is 'users' for example.
LOCK TABLE users IN EXCLUSIVE MODE;
When I check the view pg_locks it doesn't seem to be in there. I've tried other locking modes as well to no avail.
Other transactions are also able to run the LOCK statement, and they do not block as I assumed they would.
In the psql tool (8.1) I simply get back LOCK TABLE.
Any help would be wonderful.
There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. You should be able to use LOCK in transactions like this one:
BEGIN WORK;
LOCK TABLE table_name IN ACCESS EXCLUSIVE MODE;
SELECT * FROM table_name WHERE id=10;
UPDATE table_name SET field1 = 'test' WHERE id = 10;
COMMIT WORK;
I actually tested this on my db.
Bear in mind that "lock table" only lasts until the end of a transaction. So it is ineffective unless you have already issued a "begin" in psql.
(in 9.0 this gives an error: "LOCK TABLE can only be used in transaction blocks". 8.1 is very old)
The lock is only active until the end of the current transaction and released when the transaction is committed (or rolled back).
Therefore, you have to embed the statement into a BEGIN and COMMIT/ROLLBACK block. After executing:
BEGIN;
LOCK TABLE users IN EXCLUSIVE MODE;
you could run the following query to see which locks are active on the users table at the moment:
SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid WHERE relation = 'users'::regclass::oid;
The query should show the exclusive lock on the users table. After you perform a COMMIT and re-run the above-mentioned query, the lock should no longer be present.
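To convince yourself that the lock really is held, here is a minimal two-session sketch (assuming the users table from the question; run each part in its own psql window):
-- session 1: take the lock and keep the transaction open
BEGIN;
LOCK TABLE users IN EXCLUSIVE MODE;

-- session 2: reads still work, but any write waits, because even a DELETE
-- that matches no rows needs a ROW EXCLUSIVE table lock, which conflicts
-- with EXCLUSIVE
SELECT count(*) FROM users;      -- returns immediately
DELETE FROM users WHERE false;   -- blocks until session 1 finishes

-- session 1: releasing the lock lets the DELETE in session 2 proceed
COMMIT;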
In addition, you could use a lock tracing tool like https://github.com/jnidzwetzki/pg-lock-tracer/ to get real-time insights into the locking activity of the PostgreSQL server. Using such lock tracing tools, you can see which locks are taken and released in real-time.
If I execute a simple select statement in pl/sql developer against a database table, I get a standard set of results back as I would expect.
Recently, I pasted a query from a stored procedure that happened to select from a view, and noticed that a transaction was seemingly left open. This was apparent because the rollback and commit options became available in PL/SQL Developer.
A poll of other developers revealed that this seems to affect some but not others, which led me to suspect PL/SQL Developer settings.
Why on earth would this be the case? The view itself has a DB link to another database, but I wouldn't expect this to have any effect.
Any thoughts?
Contrary to your expectation, it looks like the database link is the source of the open transaction. I've noticed behaviour like this before when running SELECT queries on remote tables in PL/SQL Developer.
To quote Tom Kyte (source):
distributed stuff starts a transaction "just in case".
EDIT: 'Any SQL statement starts a transaction in Oracle'? No, it does not, and here's a demonstration of it. This demonstration uses the data dictionary view V$TRANSACTION, which lists the active transactions. This is all running on my local Oracle XE database, which has no users other than me connected to it.
We'll use the following table during this demonstration. It contains only a single column:
SQL> desc test;
Name Null? Type
----------------------------------------- -------- ----------------------------
A NUMBER(38)
SQL> select count(*) from v$transaction;
COUNT(*)
----------
0
No active transactions at the moment. Let's run a SQL query against this table:
SQL> select * from test;
A
----------
2
SQL> select count(*) from v$transaction;
COUNT(*)
----------
0
Still no active transactions. Now let's do something that will start a transaction:
SQL> insert into test values (1);
1 row created.
SQL> select count(*) from v$transaction;
COUNT(*)
----------
1
As expected, we now have an active transaction.
SQL> commit;
Commit complete.
SQL> select count(*) from v$transaction;
COUNT(*)
----------
0
After committing the transaction, it's no longer active.
Now, let's create a database link. I'm using Oracle XE, and the following creates a database link from my Oracle XE instance back to itself:
SQL> create database link loopback_xe connect to user identified by password using 'XE';
Database link created.
Now let's see what happens when we select from the table over the database link:
SQL> select count(*) from v$transaction;
COUNT(*)
----------
0
SQL> select * from test@loopback_xe;
A
----------
2
1
SQL> select count(*) from v$transaction;
COUNT(*)
----------
1
As you can see, simply selecting from a remote table opens a transaction.
I'm not sure exactly what there is to commit or rollback here, but I have to admit to not knowing the ins and outs of distributed transactions, within which the answer probably lies.
Any SQL Statement starts a transaction in Oracle.
From the manual:
A transaction begins with the first executable SQL statement. A transaction ends when it is committed or rolled back, either explicitly with a COMMIT or ROLLBACK statement or implicitly when a DDL statement is issued. [...] An executable SQL statement is a SQL statement that generates calls to an instance, including DML and DDL statements
Most probably, those who are not seeing this are running in auto-commit mode, where the transaction started by a statement is immediately committed after the statement has finished.
Others have claimed that a SELECT is not DML, but again the manual clearly states:
Data manipulation language (DML) statements query or manipulate data in existing schema objects. They enable you to:
* Retrieve or fetch data from one or more tables or views (SELECT)
* Add new rows of data into a table or view (INSERT)
[...]
You absolutely cannot open a transaction strictly with a normal query. You may open one across a database link. The guy who posted a link to the docs either deliberately or utterly carelessly left out the second sentence.
"A transaction in Oracle Database begins when the first executable SQL
statement is encountered. An executable SQL statement is a SQL
statement that generates calls to an instance, including DML and DDL
statements."
SELECT is neither a DML nor a DDL statement. It is also TRIVIAL to actually test this. I don't want to come off like a troll here, but it's really annoying when people just throw out answers on a forum to try to get points and the answers are complete garbage.
Read the rest of the doc and TEST IT FIRST.
Log in to a session.
Run a select.
See if you have an open transaction by joining v$session (for your session) to v$transaction; a sketch of this query follows below.
If a record comes back, you have a transaction. If not, you don't.
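A minimal sketch of that check for your own session (v$transaction.ses_addr matches v$session.saddr, and SYS_CONTEXT('USERENV', 'SID') returns your own sid):
-- returns one row if your session has an open transaction, no rows otherwise
select s.sid, s.serial#, t.start_time, t.status
from   v$session s
       join v$transaction t on t.ses_addr = s.saddr
where  s.sid = sys_context('userenv', 'sid');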
Note, according to the Oracle 11g Admin Guide, if you do a plain old SELECT across a database link you will start a transaction, which needs to be committed (or rolled back).
SELECT is part of DML, but no lock is acquired by it. A row lock is acquired on INSERT/UPDATE/DELETE/SELECT FOR UPDATE. Ross is right.
https://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_locks001.htm#SQLRF55502
SQL Statement               Row Locks   Table Lock
SELECT ... FROM table ...   --          None
INSERT INTO table ...       Yes         SX
The same applies to UPDATE, DELETE, and SELECT ... FOR UPDATE.
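A quick way to see the difference in practice. This is just a sketch, reusing the small test table from the earlier demonstration (run it in SQL*Plus against any small table you own):
-- a plain SELECT takes no locks and starts no transaction
select * from test;
select count(*) from v$transaction;   -- 0

-- SELECT ... FOR UPDATE takes row (TX) locks, so a transaction appears
select * from test for update;
select count(*) from v$transaction;   -- 1

rollback;                             -- releases the row locks again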
TL;DR: On a select from a remote database you also create a session and a connection in the remote DB. That session and connection persist for as long as the local user session does. As you can guess, this can lead to some problems with keeping track of sessions and connections.
SO ALWAYS DO A COMMIT:
SELECT * FROM emp@sales;
COMMIT;
For those who like a longer read:
It was bugging me so much why there is a transaction on selects from db links,
so I decided to finally settle this from the Oracle documentation:
Oracle® Database Administrator's Guide
11g Release 2 (11.2)
https://docs.oracle.com/html/E25494_01/ds_appdev002.htm
Controlling Connections Established by Database Links
When a global object name is referenced in a SQL statement or remote procedure call, database links establish a connection to a session in the remote database on behalf of the local user. The remote connection and session are only created if the connection has not already been established previously for the local user session.
The connections and sessions established to remote databases persist for the duration of the local user's session, unless the application or user explicitly terminates them. Note that when you issue a SELECT statement across a database link, a transaction lock is placed on the undo segments. To release the segment, you must issue a COMMIT or ROLLBACK statement.
Terminating remote connections established using database links is useful for disconnecting high cost connections that are no longer required by the application. You can terminate a remote connection and session using the ALTER SESSION statement with the CLOSE DATABASE LINK clause. For example, assume you issue the following transactions:
SELECT * FROM emp@sales;
COMMIT;
The following statement terminates the session in the remote database pointed to by the sales database link:
ALTER SESSION CLOSE DATABASE LINK sales;
To close a database link connection in your user session, you must have the ALTER SESSION system privilege.
Note:
Before closing a database link, first close all cursors that use the link and then end your current transaction if it uses the link.
See Also:
Oracle Database SQL Language Reference for more information about the ALTER SESSION statement
I have a simple query like this
SELECT * FROM MY_TABLE;
When I run it, SQL Server Management Studio hangs.
Other tables and views are working fine.
What can cause this? I've had locks while running UPDATE statements before, and I know how to approach those. But what could cause a SELECT to lock?
I have run the "All Blocking Transactions" report, and it says there are none.
It is probably not the select that is locking up, but some other process that is editing (update/delete/insert) the table that is causing the locks.
You can view which process is blocking by running exec sp_who2 on your SQL Server.
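As a rough sketch, the dynamic management views give the same information as sp_who2 in a more filterable form; the query below only lists requests that are currently being blocked:
-- show blocked requests, who is blocking them, and the SQL being run
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;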
Alternatively, if you are OK with dirty reads, you can do one of two things
SELECT * FROM Table WITH (NOLOCK)
OR
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM Table
If there's a lot of other activity going on, something else might be causing locks, and your SELECT might be the deadlock victim. If you run the following:
SELECT * FROM my_table WITH(nolock)
you're telling the database that you're OK to read dirty (uncommitted) data, and that locks caused by other activity can be safely ignored.
Also, if a query like that causes Management Studio to hang, your table could probably use some optimization.
Use this:
SELECT * FROM MY_TABLE WITH (NOLOCK)
Two possibilities:
It's a really massive table, and you're trying to return 500m rows.
Some other process has a lock on the table, preventing your select from going through until that lock is released.
MY_TABLE could also be locked up by some uncommitted transaction, i.e. a script or stored procedure running (or having failed while running) in another SSMS window.
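If you suspect a forgotten open transaction in another window, here is a quick sketch of how to look for it (DBCC OPENTRAN reports the oldest active transaction in the current database):
-- oldest active transaction in the current database, if any
DBCC OPENTRAN;

-- sessions that currently have an open user transaction
SELECT st.session_id, es.login_name, es.host_name, es.program_name
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_exec_sessions es ON es.session_id = st.session_id;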
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) and randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using:
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures, and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
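A sketch of what that change looks like in ASE; mytable is just a placeholder, and recreating dependent views and triggers is a separate step not shown here:
-- switch the table from allpages to datarows locking
alter table mytable lock datarows
go
-- refresh statistics after the lock-scheme change
update statistics mytable
go
-- mark dependent stored procedures and triggers for recompilation
sp_recompile mytable
go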
I have a set of long-running apps whose table access occasionally overlaps, and Sybase will throw this error. If you check the Sybase server log, it will give you the complete information on why it happened: the SQL involved and the two processes trying to get a lock, usually one trying to read and the other doing something like a delete. In my case the apps run in separate JVMs, so they can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan), you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts:
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go