When I call PreparedStatement.cancel() in a JDBC application, does it actually kill the query in an Oracle database?

I have a Java JDBC application running against an Oracle 10g database. I set up a PreparedStatement to execute a query, then call ps.executeQuery() to run it. Occasionally the query takes a long time, and I need to kill it. I have another thread access the same PreparedStatement object and call cancel() on it.
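The setup looks roughly like this (a trimmed-down sketch; the table name and the timing are made up):

final PreparedStatement ps = conn.prepareStatement("SELECT * FROM big_table"); // big_table is illustrative
Thread watchdog = new Thread(() -> {
    try {
        Thread.sleep(30_000);   // give the query 30 seconds
        ps.cancel();            // then ask the driver to cancel it
    } catch (InterruptedException | SQLException e) {
        // ignored in this sketch
    }
});
watchdog.setDaemon(true);
watchdog.start();
ResultSet rs = ps.executeQuery(); // blocks here; throws SQLException if the cancel takes effect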
My question is, does this actually kill the query in the database? Or does it just sever it from the client, and the query is still running somewhere in the bowels of Oracle?
Thanks!

Please note that what I say below is based on observations of and inferences about Oracle in use, not on any deep understanding of Oracle's internals. None of it should be considered authoritative.
What ninesided said in their first paragraph is correct. However, beware the test suggested. Not all long-running Oracle queries are the same. Queries seem to be evaluated in two phases: first a phase that gathers enough data to know how to return the rows in the right order, then a phase that returns the rows, filling in the gaps it didn't compute in the first phase. The division of work between the two phases is also affected by the settings of the cost-based optimizer, e.g. FIRST_ROWS vs. ALL_ROWS.
Now, if the query is in phase 1, the cancel request seems at best to be queued up and applied only at the end of phase 1, which means the query keeps working in the meantime.
In phase 2, rows are returned in batches, and the cancel command can take effect after each batch, so assuming the driver supports the command, the cancel request will result in the query being killed.
The JDBC specification for cancel() does not seem to say what should happen if the query does not stop running, so the command may wait for confirmation of the kill, or may time out and return with the query still running.
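If you want to observe this yourself, a rough test harness (all names illustrative) is to schedule the cancel after a fixed delay and measure how long executeQuery() takes to actually return:

long start = System.nanoTime();
java.util.Timer timer = new java.util.Timer(true);        // daemon timer thread
timer.schedule(new java.util.TimerTask() {
    @Override public void run() {
        try { ps.cancel(); } catch (SQLException ignored) { }
    }
}, 5_000);                                                // request the cancel after 5 s
try {
    ps.executeQuery();                                    // the long-running query
} catch (SQLException e) {
    // expected if the cancel took effect (ORA-01013 on Oracle)
}
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("executeQuery() returned after " + elapsedMs + " ms");
// A result much larger than ~5000 ms suggests the cancel was queued behind phase 1.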

The answer is that it's a quality-of-implementation issue. If you look at the javadoc for Statement.cancel(), it says it'll happen "if both the DBMS and driver support aborting an SQL statement".
In my experience with various versions of Oracle JDBC drivers, Statement.cancel() seems to do what you'd want. The query seems to stop executing promptly when cancelled.
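One detail worth adding: when the Oracle driver does cancel a statement, the blocked executeQuery() call typically fails with ORA-01013 ("user requested cancel of current operation"), so you can tell a cancellation apart from a real failure, roughly like this:

try {
    ps.executeQuery();
} catch (SQLException e) {
    if (e.getErrorCode() == 1013) {
        // ORA-01013: the statement was cancelled; treat as expected, not as a failure
    } else {
        throw e;
    }
}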

It depends on the driver you are using, both the driver and the database need to support the cancellation of a statement in order for it to work as you want. Oracle does support this feature, but you'd need to know more about the specific driver you are using to know for sure.
Alternatively, you could run a simple test by looking at the database after initiating and canceling a long running query.
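For instance, assuming you can SELECT from the V$ views, something like this from a second connection shows whether the session is still actively executing SQL after the cancel (the username filter is illustrative):

try (Statement st = monitorConn.createStatement();        // a second, monitoring connection
     ResultSet rs = st.executeQuery(
         "SELECT sid, serial#, status, sql_id FROM v$session " +
         "WHERE username = 'APP_USER' AND status = 'ACTIVE'")) { // APP_USER is made up
    while (rs.next()) {
        System.out.println("sid=" + rs.getLong("sid") + " sql_id=" + rs.getString("sql_id"));
    }
}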

Related

Get ALL sql queries executed in the last several hours in Oracle

I need a way to collect all queries executed in an Oracle DB (Oracle Database 11g) in the last several hours, regardless of how fast the queries are (this will be used to calculate SQL coverage after running tests against my application, which executes queries at several points).
I cannot use views like V$SQL, since there is no guarantee that a query will remain there long enough. It seems I could use DBA_HIST_SQLTEXT, but I didn't find a way to filter out queries executed before the current test run.
So my question is: what table could I use to get absolutely all queries executed in a given period of time (up to 2 hours), and what DB configuration should I adjust to reach my goal?
"I need the query itself since I need to learn which queries out of all queries coded in the application are executed during the test"
The simplest way of capturing all the executed queries would be to enable Fine-Grained Auditing on all your database tables. You could use the data dictionary to generate policies for every application table, as sketched below.
Note that even when writing to an OS file, such a number of policies will have a high impact on the database and will increase the time it takes to run the tests. Therefore you should enable these policies only to assess your test coverage, and disable them for other test runs.
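A rough sketch of generating those policies from the data dictionary via JDBC (assumes you have the privileges needed to call DBMS_FGA; the policy-name prefix and statement types are illustrative):

try (Statement tables = con.createStatement();
     ResultSet rs = tables.executeQuery("SELECT table_name FROM user_tables")) {
    while (rs.next()) {
        String table = rs.getString(1);
        try (CallableStatement cs = con.prepareCall(
                "BEGIN DBMS_FGA.ADD_POLICY(" +
                "object_schema => USER, " +
                "object_name => ?, " +
                "policy_name => 'COV_' || ?, " +                   // may need truncating to fit name limits
                "statement_types => 'SELECT,INSERT,UPDATE,DELETE'); END;")) {
            cs.setString(1, table);
            cs.setString(2, table);
            cs.execute();
        }
    }
}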
In the end I went with what Justin Cave suggested and instrumented the Oracle JDBC driver to collect every executed query in a Set, then dumped them all into a file after running the tests.
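That instrumentation can be as simple as funnelling every statement through one helper that records the SQL text before executing it. A minimal sketch of the idea (not the actual instrumented driver):

import java.io.IOException;
import java.nio.file.*;
import java.sql.*;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public final class QueryRecorder {
    private static final Set<String> SEEN = ConcurrentHashMap.newKeySet();

    // Route every query through this helper so its text is recorded first.
    public static ResultSet executeLogged(Connection con, String sql) throws SQLException {
        SEEN.add(sql);
        return con.createStatement().executeQuery(sql);
    }

    // After the test run, dump one query per line.
    public static void dump(Path target) throws IOException {
        Files.write(target, SEEN);
    }
}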

Detect and handle when a database query goes wrong

My problem is that I want all my queries to return results within a limited time. AFAIK, Postgres has two options for this: connect_timeout when opening a connection to the database, and statement_timeout for a query.
This leads to two problems:
1. I must estimate how long the query will run. My approach is to set up a worst-case scenario (a preset bandwidth to the DB server, a query with a lot of records...) and derive the timeout from that, but I don't think this is a smart way. Are there better ideas/patterns to handle this?
2. The network problem. Assume the network is bad, with heavy packet loss and terribly high ping: the query from the client and the result from the server get stuck. Of course we can set a timeout in the code, but I think that gets complicated (resource handling and other things), and it duplicates the database's timeout mechanism. Is there any way to handle this?
Another version of the story: when a query takes a long time, I want to distinguish between "this query is fine, it just has too many records, wait for it" and "this query is broken, don't wait for it".
PS: I found this link, but it is for SQL Server 2005 :(
http://www.mssqltips.com/sqlservertip/1338/finding-a-sql-server-process-percentage-complete-with-dmvs/
As you already mentioned, it is hard to predict how long a query will run (due to the query itself and its parameters, due to the network, due to server load).
Anyway, you should move the SQL queries into QThreads. This keeps your application able to serve the GUI while the queries run.
Also, I would not try to solve this with timeouts. You will get into a lot of trouble because you will fail to choose the right timeout for each query and each situation. Instead, provide a way of cancelling queries with a button or a dialog, so the user can decide whether it is sensible to continue waiting or not.
What you want to do:
when a query takes a long time, I want to distinguish: this query is fine, it just has too many records, wait for it; versus the query is "broken", don't wait for it.
is just not going to work out. You appear to require a solution to the halting problem, which is undecidable in the general case.
You must decide how long it is acceptable for a query to run, and set a timeout. There is no reliable way to predict how long a query should run, except by looking at how long similar queries took to run before. Nor is there any way to tell the difference between a correct (but slow) query and one that is going to run forever. That is particularly true when things like WITH RECURSIVE or PL/PgSQL functions are involved.
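For reference, the statement_timeout route the question mentions looks like this from JDBC (a sketch; the same server-side setting works from any client, and pg_sleep stands in for a slow query):

try (Connection con = DriverManager.getConnection(url);   // url is illustrative
     Statement st = con.createStatement()) {
    st.execute("SET statement_timeout = '30s'");          // caps every statement in this session
    st.executeQuery("SELECT pg_sleep(60)");               // the server cancels this after 30 s
} catch (SQLException e) {
    if ("57014".equals(e.getSQLState())) {
        // SQLSTATE 57014 = query_canceled: the timeout fired
    }
}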
You can run the queries in a specific class whose object lives in a separate thread, and wait with a timeout for the thread to quit:
databaseObject->performQuery();
QThread *th = databaseObject->thread();
th->quit();          // ask the thread's event loop to exit
th->wait(2000);      // give it up to 2 seconds to finish cleanly
if (th->isRunning())
{
    th->terminate(); // last resort: forcibly kill the thread (unsafe; resources may leak)
    return false;
}
return true;

TADOStoredProc/TADOQuery CommandTimeout...?

On a client machine, a "Timeout" error is raised when certain commands are run against the database.
My first idea for a fix is to increase the CommandTimeout to 99999... but I am afraid this treatment will generate further problems.
Has anyone experienced this?
I wonder if my concern is relevant, and/or if there is another, more robust and elegant option.
You are correct to assume that upping the timeout is not the right approach. Typically, I look for long-running queries that are running up against the timeouts; they will usually stand out in terms of duration and reads.
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time the client waits for a response from the server. If the query runs in the main VCL thread, the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and concentrate on tuning the queries, as Sam suggests. If you do have long-running queries (i.e. some background data movement, calculations etc. in stored procedures), set the CommandTimeout to 0 (= INFINITE) but run them in a separate thread.
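For what it's worth, the same advice maps onto JDBC too (a sketch; batch_recalc is a hypothetical long-running stored procedure):

Statement st = con.createStatement();
st.setQueryTimeout(0);                        // 0 = no client-side limit, like CommandTimeout = 0
new Thread(() -> {
    try {
        st.execute("BEGIN batch_recalc; END;");  // batch_recalc is made up for this example
    } catch (SQLException e) {
        e.printStackTrace();
    }
}).start();                                   // the UI thread stays responsive meanwhile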

Stopping SQL code execution

We have a huge Oracle database, and I frequently fetch data using SQL Navigator (v5.5). From time to time I need to stop code execution by clicking the Stop button, because I realize there are missing parts in my code. The problem is that after clicking Stop, it takes a very long time to complete the stopping process (sometimes hours!). The program says Stopping... in the bottom bar, and I lose a lot of time until it finishes.
What is the rationale behind this? How can I speed up the stopping process? In case it matters, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process executing the query should check from time to time whether it has been asked to cancel. Even when it is doing a long task (a big HASH JOIN, for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). Now, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose the cancel mechanism works as with any other tool, so I'm guessing you're actually waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle; they can't be stopped in the middle without rolling back). Most of the time with a DML statement, the rollback takes longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (a big DML), you will have to be patient during the rollback; there is not much you can do to speed up the process.
If your query is a simple SELECT and your tool won't let you cancel it, you could kill your session (this needs admin rights, from another session) -- that should be near-instantaneous.
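For completeness, killing the session from that second, privileged session looks roughly like this (the username filter is illustrative, and you need the ALTER SYSTEM privilege):

try (Statement find = adminCon.createStatement();
     ResultSet rs = find.executeQuery(
         "SELECT sid, serial# FROM v$session WHERE username = 'REPORT_USER'")) { // REPORT_USER is made up
    while (rs.next()) {
        String spec = rs.getLong(1) + "," + rs.getLong(2);   // 'sid,serial#' pair
        try (Statement kill = adminCon.createStatement()) {
            kill.execute("ALTER SYSTEM KILL SESSION '" + spec + "' IMMEDIATE");
        }
    }
}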
When you cancel a query, the Oracle client should send an OCIBreak(), but this isn't implemented on a Windows server; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.

Log of Queries executed on SQL Server

I ran a T-SQL query on a SQL Server 2005 DB that was supposed to run for 4 days, but when I checked after 4 days I found the machine had been shut down abruptly. So I want to trace what happened to the query, or in other words, what the status of the query was when the machine went down.
Is there a mechanism to check this?
It may be very difficult to do such a post-mortem. But while it may not be possible to determine which portion of the long-running query was active when the shutdown occurred, it is important to try and find the root cause of the problem!
Depending on the way the query is written, SQL Server will attempt to roll back any SQL that wasn't completed (or any uncommitted transaction, if transactions were used). This happens the first time the server is restarted; if you have any desire to analyze the SQL transaction log (yuck!), make a copy first.
Before getting to the part of the query that may have been running at the time of the shutdown, it may be preferable to look into the SQL Server logs (I mean the application logs, with informational messages, timestamps and such, not the SQL transaction logs for a given database), in search of basic facts, and possibly of an error message some time prior to the shutdown that could plausibly be the origin of the problem. Also check the Windows event logs, as they may indicate abnormal system-wide conditions that could be a cause or a consequence of the shutdown.
The SQL transaction log can be "decompiled" and used to understand what was actually updated/inserted/deleted in the database; however, this kind of info for a 4-day run may be buried quite deep and interspersed with unrelated queries. Also, running out of disk space for the SQL log could itself cause the server to become erratic (I would expect more graceful handling of such a situation, but if the drive in question was also the system drive, bad things could happen...).
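If you do go digging in the transaction log, the usual entry point is fn_dblog; it is undocumented, so treat this as a sketch:

try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT [Current LSN], Operation, [Transaction ID] " +
         "FROM fn_dblog(NULL, NULL)")) {                  // NULL, NULL = scan the whole active log
    while (rs.next()) {
        System.out.println(rs.getString("Operation"));    // e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW
    }
}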
Finally, by analyzing the 4-day-long SQL script itself, it may be possible to infer which parts completed, thanks to various side effects of the script. If nothing else, this review may allow putting the database back into a coherent state if necessary, or truncating the script to exclude the parts already completed, in preparation for a new run to finish the work.