Delete operation takes a long time to complete - sql

I have a method that deletes records in a DB. The query is built correctly and the records are deleted, but only after 40 seconds to 1 minute.
If I execute the same query at the DB prompt, the record is deleted immediately.
The code I have only does the following:
getting the database connection
preparing the statement, passing 3 variables to the "delete from" statement
calling executeUpdate on the statement
calling commit on the connection
closing the db connection
What could be wrong? Any clue?

You are implicitly assuming that a DELETE statement is trivial in all cases, which is not always true. At the very least, it needs to find the records it wants to remove in the table. This may require a full table scan if, for example, the WHERE predicate cannot use an existing index.
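As a sketch (the table and column names below are invented, since the original statement isn't shown), an index covering the WHERE predicate usually turns that table scan into a direct lookup:
-- Hypothetical delete keyed on three bind variables, as in the question
DELETE FROM orders
WHERE customer_id = ? AND region = ? AND order_date = ?;
-- Without a supporting index the engine may have to scan every row.
-- An index covering the predicate lets it locate the rows directly:
CREATE INDEX ix_orders_cleanup ON orders (customer_id, region, order_date);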

Understanding locks and query status in Snowflake (multiple updates to a single table)

While using the Python connector for Snowflake with queries of the form
UPDATE X.TABLEY SET STATUS = %(status)s, STATUS_DETAILS = %(status_details)s WHERE ID = %(entry_id)s
I sometimes get the following message:
(snowflake.connector.errors.ProgrammingError) 000625 (57014): Statement 'X' has locked table 'XX' in transaction 1588294931722 and this lock has not yet been released.
and soon after that
Your statement 'X' was aborted because the number of waiters for this lock exceeds the 20 statements limit
This usually happens when multiple queries are trying to update a single table. What I don't understand is that when I look at the query history in Snowflake, it says the query finished successfully (Succeeded status), but in reality the UPDATE never happened, because the table did not change.
So according to https://community.snowflake.com/s/article/how-to-resolve-blocked-queries I used
SELECT SYSTEM$ABORT_TRANSACTION(<transaction_id>);
to release the lock, but still nothing happened, and even with the Succeeded status the query seems not to have executed at all. So my question is: how does this really work, and how can a lock be released without losing the execution of the query? (Also, what happens to the other 20+ queries that are queued because of the lock? Sometimes it seems that when the lock is released, the next one takes the lock and has to be aborted as well.)
I would appreciate it if you could help me. Thanks!
Not sure if Sergio got an answer to this. The problem in this case is not with the table. Based on my experience with Snowflake, below is my understanding.
In Snowflake, every table operation also involves a change in the meta table that keeps track of micro-partitions and their min/max values. This meta table supports only 20 concurrent DML statements by default. So if a table is continuously being updated and hit at the same partition, there is a chance this limit will be exceeded. In that case, we should look at redesigning the table update/insert logic. In one of our use cases, we increased the limit to 50 after speaking to the Snowflake support team.
UPDATE, DELETE, and MERGE cannot run concurrently on a single table; they are serialized, as only one can take a lock on a table at a time. The others queue up in the "blocked" state until it is their turn to take the lock. There is a limit on the number of queries that can be waiting on a single lock.
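As a sketch of how to investigate (the transaction id is the one from the error message quoted above):
SHOW LOCKS;
-- lists blocked statements and the transaction currently holding the lock
SELECT SYSTEM$ABORT_TRANSACTION(1588294931722);
-- aborts a stuck transaction, releasing its lock for the next waiter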
If you see an update finish successfully but don't see the updated data in the table, then you are most likely not COMMITting your transactions. Make sure you run COMMIT after an update so that the new data is committed to the table and the lock is released.
Alternatively, you can make sure AUTOCOMMIT is enabled so that DML will commit automatically after completion. You can enable it with ALTER SESSION SET AUTOCOMMIT=TRUE; in any sessions that are going to run an UPDATE.
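For example, with autocommit off, make sure every update is explicitly bracketed (a minimal sketch; the literal values stand in for the bind parameters in the question):
BEGIN;
UPDATE X.TABLEY SET STATUS = 'DONE', STATUS_DETAILS = 'ok' WHERE ID = 42;
COMMIT;  -- commits the change and releases the table lock for the next waiter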

In what sequence will the Delete SQL execute?

Refer to my previous posting:
SQL cleanup script, delete from one table that's not in the other
Using DB2 for IBM i (AS400, Db2).
I am executing the following SQL as a cleanup script at 3 AM.
DELETE FROM p6prodpf A WHERE (0 = (SELECT COUNT(*) FROM P6OPIPF B WHERE B.OPIID = A.OPIID))
I have a different process that runs at the same time as this SQL and inserts two records: first the P6OPIPF record, and then the detail record into P6PRODPF.
The problem.
The P6PRODPF record is missing after the SQL cleanup ran. But remember that the process that stores the records ran at the same time.
My understanding of the SQL is that it goes through P6PRODPF and checks whether each record is in P6OPIPF; if it is not in P6OPIPF, it deletes the P6PRODPF record.
But then I ran Visual Explain in System i Navigator on this SQL and got the following result.
Now I am confused.
After the Visual Explain, it looks like the statement starts by checking P6OPIPF.
So then it reads: if there is a record at that instant in P6OPIPF and no record with the same key in P6PRODPF, then delete the P6PRODPF record.
This could explain my problem: P6PRODPF gets deleted when the process that inserts the records and the SQL script run at the same time.
So here is how I see it in sequence (my theory):
1) The process that inserts the two records starts.
2) It inserts the first record into P6OPIPF.
3) At the same time, the SQL cleanup runs. The query sees the P6OPIPF record and checks whether it has a P6PRODPF record. At this stage no P6PRODPF record has been inserted yet, so the SQL decides it needs to delete the record in P6PRODPF.
4) Meanwhile, the inserting process inserts the second record into P6PRODPF.
5) Because the SQL did not see the P6PRODPF record at that stage, it deletes the newly inserted P6PRODPF record, leaving a P6OPIPF record with no P6PRODPF record.
Am I correct?
What I actually want to know about is just the delete script listed above. My understanding was that it goes through P6PRODPF and checks whether each record is in P6OPIPF; if it is not in P6OPIPF, it deletes the P6PRODPF record. But after the Visual Explain I can see it starts by checking P6OPIPF. So what will the delete statement check first?
The insert code is generated by the CA Plex generator, as RPG IV code.
My one function inserts first the P6OPIPF record (OperationsItem.Update.InsertRow) and then its detail into P6PRODPF (ProductDetail.Update.InsertRow).
Insert Row function
My scheduled function code that will execute the delete script.
Scheduled delete script function
Hope it makes sense.
Visual Explain is a useful tool for understanding what the DB is doing, particularly when trying to enhance performance.
But SQL is not a procedural language. You should not, and really cannot, say "when I run this statement, the DB does this, then this."
While that might be true for one particular run, it is highly dependent on the data and the resources available. You cannot design a process around the steps you see.
You really shouldn't be trying to run both processes at the same time; there is simply no way to ensure what you'll end up with, at least when using the default isolation level (probably "no commit" or "read uncommitted", depending on the interface).
If you must run both processes at the same time, you probably want to run the delete under "repeatable read" or "serializable", which should have the effect of locking the referenced tables so that no other process can change them.
Optionally, you could run both the delete and the insert under read stability or a higher isolation level.
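For example, DB2 for i lets you attach an isolation clause to a single statement, so the cleanup could run under repeatable read without changing the job's default (a sketch based on the statement above):
DELETE FROM p6prodpf A
WHERE 0 = (SELECT COUNT(*) FROM P6OPIPF B WHERE B.OPIID = A.OPIID)
WITH RR
-- WITH RR (repeatable read) holds locks on the rows examined until the
-- transaction ends, so a concurrent insert cannot slip in mid-statement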
To explain the Visual Explain: DB2 has to evaluate the inner expression before executing the DELETE, or else it would not know which rows are affected.
The reason your archived rows aren't archived is that the delete script ran before the insert script.
Have you heard of the concepts of "transactions" and "isolation"? Typically, different processes running against the same database are shielded (isolated) from each other, so they operate without seeing the immediate impact of any other transaction running at the same time. Logically, two transactions (each a process or sequence of SQL statements) executed at the same time behave as if they were executed serially.
In your case, either process may be "first" or "second". If you repeat your test, you may see different results depending on which one is (logically) "first".

Oracle 10g - Lock table where two procedures might update this same table synchronously

I want to lock a table in Oracle 10g so that, e.g., procedure A has to wait until procedure B is finished updating. I read about the LOCK TABLE command, but I am not sure whether the other procedure actually waits for the lock to be acquired.
It is also possible that another thread calls the same stored procedure B during the update process; I guess that since the stored procedure runs in a single thread, this would also be a problem?
You wouldn't normally want to lock a whole table in Oracle, though locking in general is of course an important issue. By default, if two sessions try to update the same row, the second is "blocked" and has to wait for the first to commit or roll back its change. You can use a SELECT with a FOR UPDATE clause to lock a row without updating it.
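A minimal sketch (the table and column names are invented):
-- Session A: lock the row of interest without updating it yet
SELECT qty FROM inventory WHERE item_id = 42 FOR UPDATE;
-- A second session issuing the same SELECT ... FOR UPDATE blocks here
UPDATE inventory SET qty = qty - 1 WHERE item_id = 42;
COMMIT;  -- releases the row lock; the waiting session resumes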
Instead of using a single regular table shared by all sessions, you could use a Global Temporary Table: each session then has its own copy of the data.
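If you go that route, the DDL looks like this (names are hypothetical):
CREATE GLOBAL TEMPORARY TABLE session_work (
  item_id NUMBER,
  qty     NUMBER
) ON COMMIT DELETE ROWS;
-- each session sees only its own rows, so cross-session locking never arises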

How does updating or inserting while looping through a result set affect the result set itself?

Suppose I fetch a result set (RS) based on certain conditions and start looping through it; then, in certain situations, I update, insert, or delete records, which may have been part of this RS, using separate prepared statements.
How does this affect the result set? My inclination is to think that since the statement which fetched this RS was executed earlier in the process, the RS will now be blind to the changes made by my prepared statements.
Pseudocode :
Prepare Statement ps1
Execute ps1 -> get ResultSet rs1
Loop through rs1
{
    Update or delete records using other prepared statements
}
Read Consistency
Oracle guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency)
That is why, if you have a query such as
insert into t
select * from t;
Oracle will simply duplicate all rows without going into an infinite loop or raising an error.
There are other implications because of this.
1) Oracle reads from the rollback segment to provide you with this read-consistent image of your data. So, if your rollback segments are not correctly sized, or you commit across fetches, you'll get the "Snapshot too old" error, since the rollback data you need is no longer available.
OK, so if that is the case, is it possible to refresh it while making updates? I mean, aside from making the cursor updatable and using the built-in functions of the result set.
2) Each query sees the data as of the point in time it began. If by refresh you mean re-firing the query, then the data you see might be different again if you commit within your PL/SQL body or loop, or if other transactions are running concurrently in your system.
It doesn't. The database preserves a read-consistent image of the query's result set, even if you alter or remove the rows on which it is based. So you are correct: it is blind to changes made after the statement started executing.
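A minimal PL/SQL sketch of the point (assuming a table t with an id column): the cursor keeps its read-consistent view even though the loop deletes the very rows it iterates over.
DECLARE
  CURSOR c IS SELECT id FROM t;
BEGIN
  FOR r IN c LOOP
    DELETE FROM t WHERE id = r.id;  -- does not disturb the open cursor
  END LOOP;                         -- every row visible at open time is fetched
  COMMIT;
END;
/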

Do I need to perform a SELECT query after executing a Query to modify my record(s) in order to view the changed results?

Each time I perform a query (INSERT, DELETE, UPDATE), do I afterwards need to do SELECT * FROM Table so my info can be seen in the grid control?
For example:
UniQuery1 is my dataset.
I'm using a TDBAdvListView control.
UniQuery1.Close;
UniQuery1.SQL.Clear;
SQL_QUERY := 'insert into ListaCamiones(Tablilla,Marca,Modelo,Color) Values (' +
  QuotedStr(a1) + ',' +
  QuotedStr(a2) + ',' +
  QuotedStr(a3) + ',' +
  QuotedStr(a4) + ')';
UniQuery1.SQL.Text := SQL_QUERY;
UniQuery1.Execute;
Do I need to do SELECT * FROM ListaCamiones
so I can see the information back in my TDBAdvListView?
The answer is both yes and no!
Yes, in that you do have to perform a SELECT query again in order to fetch the modified recordset; no, in that you don't have to perform the query as a separate execution.
If you append a semicolon at the end of your INSERT/UPDATE/DELETE query string and immediately follow it with the desired SELECT query, a single call to Execute will simultaneously update the records and fetch the updated recordset for display.
Additionally, I would change the way you're building your SQL string too!
const
  INSERT_QUERY_STRING = 'INSERT INTO ListaCamiones(Tablilla, Marca, Modelo, Color) ' +
    'VALUES (%s, %s, %s, %s); SELECT * FROM ListaCamiones';

// Now inside your method
UniQuery1.SQL.Text := Format(INSERT_QUERY_STRING,
  [QuotedStr(a1), QuotedStr(a2), QuotedStr(a3), QuotedStr(a4)]);
UniQuery1.Execute;
Hope it helps!
In general, yes, because in my experience, when you make database changes via SQL statements:
no database component automatically refreshes the query, and
no database can refresh the data in your application when the data has changed in the database.
I recommend that you use a separate query component (UniQuery2) to execute your SQL statement. Then you can use the ReQuery method of your query to re-execute your original query (UniQuery1). Depending on the database components you are using, your local cursor may be reset.
Alternatively, you can use Append/Insert to add records and Edit to change records of UniQuery1. This avoids re-executing the original query, because the changes are added to the dataset records buffered locally by the query component. But re-executing the query is still necessary to pick up records that were added or edited by other users since your query was last executed.
If you just inserted the information into the database, you already have it!
In some SQL variants (in MySQL, I am sure), the API offers a call such as mysql_insert_id(), which returns the AUTO_INCREMENT value of the last inserted row.
If you just want that ID, this is the way to go (in MySQL, like I said), but if you want other data you have to query it again, either in a combined query (as posted above) or in two separate queries.
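The SQL-level equivalent in MySQL is LAST_INSERT_ID(); a sketch, assuming the table had an AUTO_INCREMENT key (the values are placeholders):
INSERT INTO ListaCamiones (Tablilla, Marca, Modelo, Color)
VALUES ('ABC123', 'Mack', 'Granite', 'Rojo');
SELECT LAST_INSERT_ID();  -- id generated by this session's last INSERT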
Glad to help!