When using OCCI, deleting runs forever - SQL

The same statement:
DELETE (SELECT * FROM tablename WHERE id=12)
runs normally in SQL Developer, but when executed through the OCCI API it takes forever.
I have checked that the query "SELECT * FROM tablename WHERE id=12" matches a non-empty set of rows.
More specifically, I use the following syntax:
oracle::occi::Statement *deleteStm = con->createStatement("DELETE(SELECT * FROM tablename WHERE id=12)");
oracle::occi::ResultSet *rs = deleteStm->executeQuery();

I suspect that in your case you've simply got an uncommitted transaction. It goes like this:
session1                                 session2
DELETE ...
(table/rows are locked)
                                         SELECT * FROM ...   -- you will see all the data
                                         DELETE ...          -- and now you will wait and wait
                                                             -- until the lock is released
COMMIT;
                                         SELECT * FROM ...   -- now the result set is empty
I strongly encourage you to read Data Concurrency and Consistency.
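If you want to confirm that this is what is happening, Oracle lets you see which sessions are blocked and by whom (a minimal sketch; it assumes you have access to V$SESSION, Oracle 10g or later):

-- sessions currently waiting on a lock, and the session holding it
SELECT sid, serial#, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;

Committing or rolling back in the blocking session (here, the SQL Developer one) releases the lock and lets the OCCI delete finish.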

How to skip rows from getting selected in SQL

How to skip rows from getting selected which are holding locks?
Begin tran
Select *
From table with(holdlock)
Where id = 2
In a second session, when the query gets executed, the row which has an id value of 2 should be skipped in the result.
The (holdlock) holds the lock until the end of the transaction - but it is a shared lock, i.e. other readers aren't blocked by it.
If you really must do this, you need to establish (and hold on to) an exclusive lock:
Begin tran
Select *
From table with (updlock, holdlock)
Where id = 2
and use the WITH (READPAST) hint in the second session, so that it is not blocked by the exclusive lock.
PS: updated to use updlock, based on @charlieface's recommendation - thanks!
I guess you can try the READPAST hint:
https://www.sqlshack.com/explore-the-sql-query-table-hint-readpast/
READPAST will skip existing locks. But you need another lock hint, otherwise SELECT will not keep the lock.
XLOCK, HOLDLOCK (aka SERIALIZABLE) is an option, but can be restrictive.
You can use UPDLOCK to keep a lock until the end of the transaction. UPDLOCK generates a U lock, which blocks other U or X (exclusive writer) locks, but still allows S reader locks. So the data can still be read by a read-only process, but another process executing this same code would still be blocked.
Begin tran
Select *
From table WITH (UPDLOCK, READPAST)
Where id = 2
-- etc
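For illustration, here is how the two sessions might interact with this approach (a minimal sketch; the table and column names follow the question, and the timing comments are assumptions):

-- session 1
Begin tran
Select * From table WITH (UPDLOCK, READPAST) Where id = 2   -- returns the row and holds a U lock on it
-- (transaction left open)

-- session 2, while session 1's transaction is still open
Begin tran
Select * From table WITH (UPDLOCK, READPAST) Where id = 2   -- the U-locked row is skipped, so it is not returned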
I achieved this with the following code in the second session.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
Select * from table WITH (Readpast)
Thank you all for your help.

Stored Procedure for batch delete in Firebird

I need to delete a bunch of records (literally millions), but I don't want to do it in one individual statement because of performance issues. So I created a view:
CREATE VIEW V1
AS
SELECT FIRST 500000 *
FROM TABLE
WHERE W_ID = 14
After that I do a bunch of deletes, for example:
DELETE FROM V1 WHERE TS < '2021-01-01'
What I want is to put this logic into a WHILE loop inside a stored procedure. I tried a SELECT COUNT query like this:
SELECT COUNT(*)
FROM TABLE
WHERE W_ID = 14 AND TS < '2021-01-01';
Can I use this number in the same procedure as a condition and how can I manage that?
This is what I have tried, and I get an error:
ERROR: Dynamic SQL Error; SQL error code = -104; Token unknown; WHILE
Code:
CREATE PROCEDURE DeleteBatch
AS
DECLARE VARIABLE CNT INT;
BEGIN
SELECT COUNT(*) FROM TABLE WHERE W_ID = 14 AND TS < '2021-01-01' INTO :cnt;
WHILE cnt > 0 do
BEGIN
IF (cnt > 0) THEN
DELETE FROM V1 WHERE TS < '2021-01-01';
END
ELSE break;
END
I just can't wrap my head around this.
To clarify: in my previous question I wanted to know how to manage garbage collection after many deleted records, and I did what was suggested - SELECT * FROM TABLE; or gfix -sweep - and that worked very well. As mentioned in the comments, the correct statement is SELECT COUNT(*) FROM TABLE;
After that another, even bigger database was given to me - above 50 million records. The problem was that the DB was very slow to operate with, and I managed to get the server it was running on killed with a DELETE statement while trying to clean the database.
That's why I wanted to try deleting in batches. The slow-down problem there was purely hardware - the HDD had failed, and we replaced it. After that there was no problem with executing statements and doing a backup and restore to reclaim disk space.
Provided the data that you need to delete never needs to be rolled back once the stored procedure is kicked off, there is another way to handle massive DELETEs in a stored procedure.
The example stored procedure will delete the rows 500,000 at a time. It will loop until there aren't any more rows to delete. The AUTONOMOUS TRANSACTION allows you to put each delete statement in its own transaction, and it commits immediately after the statement completes. This is issuing an implicit commit inside a stored procedure, which you normally can't do.
CREATE OR ALTER PROCEDURE DELETE_TABLEXYZ_ROWS
AS
DECLARE VARIABLE RC INTEGER;
BEGIN
  RC = 9999;
  WHILE (RC > 0) DO
  BEGIN
    IN AUTONOMOUS TRANSACTION DO
    BEGIN
      /* delete the next batch; the autonomous transaction commits as soon as this block ends */
      DELETE FROM TABLEXYZ ROWS 500000;
      RC = ROW_COUNT;
    END
  END

  /* the full count forces a natural scan, which triggers garbage collection of the deleted record versions */
  SELECT COUNT(*)
  FROM TABLEXYZ
  INTO :RC;
END
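A minimal usage sketch, assuming the procedure above compiles (in isql the CREATE would typically be wrapped in SET TERM so the semicolons inside the body do not end the DDL statement early):

EXECUTE PROCEDURE DELETE_TABLEXYZ_ROWS;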
because of performance issues
What are those, exactly? I do not think you are actually improving performance by just running deletes in loops, whether within the same transaction or in different transactions within the same timespan. You seem to be solving the wrong problem. The issue is not how you create "garbage", but how and when Firebird "collects" it.
For example, SELECT COUNT(*) in InterBase/Firebird engines means a natural scan over the whole table, and garbage collection is often triggered by it, which can itself take a long time if a lot of garbage was created (and a massive delete surely creates a lot, no matter whether it is done by one million-row statement or a million one-row statements).
How to delete large data from Firebird SQL database
If you really want to slow down deletion, you have to spread that activity around the clock and make your client application call a deleting SP, for example, once every 15 minutes. You would have to add some column to the table flagging rows as marked for deletion, and then do the job like this:
CREATE PROCEDURE DeleteBatch(CNT INT)
AS
DECLARE VARIABLE ROW_ID INTEGER;
BEGIN
  FOR SELECT ID FROM TABLENAME WHERE MARKED_TO_DEL > 0 INTO :ROW_ID
  DO BEGIN
    CNT = CNT - 1;
    DELETE FROM TABLENAME WHERE ID = :ROW_ID;
    IF (CNT <= 0) THEN LEAVE;
  END
  SELECT COUNT(1) FROM TABLENAME INTO :ROW_ID; /* force GC now */
END
...and every 15 minutes you do EXECUTE PROCEDURE DeleteBatch(1000).
Overall this probably would only be slower, because of single-row "precision targeting" - but at least it would spread the delays.
Use DELETE...ROWS.
https://firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-dml-delete-orderby
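For illustration, a single batched delete with the ROWS clause might look like this (a minimal sketch; the table and column names follow the question, and the batch size of 500000 mirrors the view above):

DELETE FROM TABLE
WHERE W_ID = 14 AND TS < '2021-01-01'
ROWS 500000;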
But as I already said in the answer to the previous question, it is better to spend time investigating the source of the slowdown instead of working around it by deleting data.

Roll-back last UPDATE SQL query in SQL Server 2014

I have accidentally executed the query:
UPDATE TableName
SET Name='Ram'
How can I undo this change?
Before running updates, or especially deletes, always test them. For a delete, put the statement into a select block:
SELECT COUNT(NAME)
FROM TableName
WHERE ...
And make sure the number of records returned matches what you are wanting to delete. For an update it is a little more involved. You will have to use a transaction:
BEGIN TRANSACTION
UPDATE TableName
SET Name = 'Ram'
SELECT *
FROM TableName
WHERE Name = 'Ram'
--Rollback Transaction
--Commit Transaction
Based on what you did in the transaction, run just the first part, leaving COMMIT and ROLLBACK commented out; the SELECT will let you validate that everything worked correctly. If it is what you want, highlight and run just the COMMIT without the comment marks; if it isn't what you wanted, highlight and run just the ROLLBACK without the comment marks to undo it and try again. Hope this helps in the future.

SQL delete not appearing to work

I am running a delete SQL statement in Management Studio 2008 and it indicates to me that the SQL has worked... but it hasn't.
For example
select count(*) from MyTable where [MYkey] =24
returns 1
delete from MyTable where [MYkey] = 24
Rows Affected 1
But if I immediately run the first statement again, the record is still there. If I try an update statement, that works. I am seeing this behaviour on all tables in the database.
I had a few issues with the transaction log being full a few days ago, so I changed the recovery model to simple. Could this be related? If so, what do I need to do?
Try this -
BEGIN TRANSACTION;
delete from MyTable where [MYkey] = 24;
COMMIT TRANSACTION;
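If the row still reappears, it may be worth checking whether the session has an open (for example implicit) transaction that was never committed - a minimal sketch using standard SQL Server commands:

-- number of open transactions in the current session; anything above 0 means uncommitted work
SELECT @@TRANCOUNT AS OpenTransactions;

-- report the oldest active transaction in the current database, if any
DBCC OPENTRAN;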

Deadlock on query that is executed simultaneously

I've got a stored procedure that does the following (Simplified):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
DECLARE @intNo int
SET @intNo = (SELECT MAX(intNo) + 1 FROM tbl)
INSERT INTO tbl(intNo)
VALUES (@intNo)
SELECT intNo
FROM tbl
WHERE (intBatchNumber = @intNo - 1)
COMMIT TRANSACTION
My issue is that when two or more users execute this at the same time, I am getting deadlocks. Now, as I understand it, the moment I do my first select in the proc, this should create a lock on tbl. If the second procedure is then called while the first procedure is still executing, it should wait for it to complete, right?
At the moment this is causing a deadlock, any ideas?
The insert query requires a different lock than the select. The lock for select blocks a second insert, but it does not block a second select. So both queries can start with the select, but they both block on the other's insert.
You can solve this by asking the first query to lock the entire table:
SET @intNo = (SELECT MAX(intNo) + 1 FROM tbl with (tablockx))
                                             ^^^^^^^^^^^^^^^
This will make the second transaction's select wait for the complete first transaction to finish.
Make it simpler so you have one statement and no transaction:
--BEGIN TRANSACTION not needed
INSERT INTO tbl(intNo)
OUTPUT INSERTED.intNo
SELECT MAX(intNo) + 1 FROM tbl WITH (TABLOCK)
--COMMIT TRANSACTION not needed
Although, why aren't you using IDENTITY...?
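For completeness, a minimal sketch of the IDENTITY alternative hinted at above (the new table name is hypothetical, and it assumes the table can be restructured):

-- let SQL Server assign the sequential number, so there is no MAX()+1 race and no table lock is needed
CREATE TABLE tbl2 (
    intNo int IDENTITY(1,1) PRIMARY KEY
    -- other columns as needed
);

INSERT INTO tbl2 DEFAULT VALUES;
SELECT SCOPE_IDENTITY() AS intNo;   -- the value generated for this insert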