Is it possible to start a query that will take a significant amount of time and monitor progress from the UI via Ajax?
I considered starting the process as a "run once" job scheduled to run immediately. I could store the results in a temporary table for quick retrieval once it's complete. I could also log the run time of the report and average it out, to guesstimate the running time for the progress bar.
I use Microsoft SQL Server 2005 at the moment, but I'm willing to move to another DBMS such as SQL Server 2008, MySQL, etc. if necessary.
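Roughly, what I have in mind (dbo.JobStatus is a hypothetical table I'd create, not an existing schema):

-- The scheduled job inserts a row on start and stamps finished_at on
-- completion; the Ajax endpoint polls this table to drive the progress bar.
CREATE TABLE dbo.JobStatus (
    job_id          INT IDENTITY(1, 1) PRIMARY KEY,
    started_at      DATETIME NOT NULL DEFAULT GETDATE(),
    finished_at     DATETIME NULL,
    avg_runtime_sec INT NULL  -- averaged from past runs, for the estimate
);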
One idea, if the long-running job populates another table:
Use a 2nd database connection to monitor how many rows have been processed out of the source rows, and show a simple "x rows processed" every few seconds:
SELECT COUNT(*) FROM TargetTable WITH (NOLOCK)
If you have a source table too:
SELECT COUNT(*) FROM SourceTable WITH (NOLOCK)
...then you can show "x of y rows processed".
Basically, you have to use a 2nd connection to monitor the first. However, you also need something to measure...
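Putting the two counts together, a sketch of what the monitoring connection could run on each Ajax poll (TargetTable and SourceTable are the placeholder names from above):

-- Dirty reads are acceptable here; we only need a rough progress figure.
DECLARE @done INT, @total INT;
SELECT @done  = COUNT(*) FROM TargetTable WITH (NOLOCK);
SELECT @total = COUNT(*) FROM SourceTable WITH (NOLOCK);
SELECT @done  AS RowsProcessed,
       @total AS RowsTotal,
       CAST(100.0 * @done / NULLIF(@total, 0) AS DECIMAL(5, 1)) AS PercentComplete;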
Related
I'm running a query below in an oracle db.
SELECT program_id, program_line#, sql_text
FROM V$SQL VS,
     ALL_USERS AU
WHERE (executions >= 1)
  AND (parsing_user_id != 0)
  AND (AU.user_id(+) = VS.parsing_user_id)
  AND UPPER(AU.USERNAME) IN (UPPER('CARGO'))
ORDER BY last_active_time DESC;
I just wanted to ask whether the result returned by this query includes SQL statements that are still running, or queries that have timed out or been cancelled by the user.
Yes, V$SQL shows information about queries that are still running. From Oracle's documentation:
V$SQL lists statistics on shared SQL areas without the GROUP BY clause
and contains one row for each child of the original SQL text entered.
Statistics displayed in V$SQL are normally updated at the end of query
execution. However, for long running queries, they are updated every 5
seconds. This makes it easy to see the impact of long running SQL
statements while they are still in progress.
As for the second part of your question, the answer is: it depends. The length of time a query stays in the cache (where V$SQL gathers its information) depends on the size of your cache and the number of unique/distinct queries running at any given time. If the same kinds of queries are frequently run against the database (i.e. they stay cached), old queries will remain in the V$SQL view longer than in a database where many distinct queries are executed, all else being equal. Distinct queries that aren't already in the cache are added to the library cache, pushing older/timed-out queries out.
If you want to control how long queries stay in the cache, you will have to configure the size of the shared pool. I would recommend reading up on the Library Cache at https://docs.oracle.com/database/121/TGDBA/tune_shared_pool.htm#TGDBA560
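If you want to see only statements that are executing right now, one approach (a sketch, separate from your query) is to join V$SESSION, which tracks active sessions, back to V$SQL:

-- Shows only SQL attached to currently active user sessions.
SELECT s.sid, s.username, q.sql_text
FROM v$session s
JOIN v$sql q
  ON q.sql_id = s.sql_id
 AND q.child_number = s.sql_child_number
WHERE s.status = 'ACTIVE'
  AND s.type = 'USER';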
My colleague created a project in Talend to read data from Oracle database.
I used his project, so I have his Job context with the connection parameters to the Oracle DB, and Talend successfully connects on that computer.
I've created a trivial job which is composed of two components: tOracleInput which should be reading data and tLogRow which should be redirecting output to Talend's terminal.
The problem is that when I start the job, no data is output to the terminal; instead of a rows-per-second figure I only see a Starting ... status.
Could it be a connection issue, an inappropriate Java version on my computer, or something else?
The Starting ... status means that the query is still being executed. A simple query against the database usually takes only a few seconds, because Oracle starts returning data before it has completed a full table scan. You keep this behavior with joins and filters, but not with GROUP BY or ORDER BY.
On the other hand, if you're querying a view, executing a complex query, or simply using DISTINCT, execution can take a few minutes. This is because the Oracle database has to generate the full ResultSet on the database side before returning any records.
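To illustrate the difference (orders is a made-up table, not from your job):

-- Oracle can stream these rows as soon as they are found:
SELECT order_id FROM orders WHERE status = 'OPEN';

-- Oracle must build the complete result before returning the first row:
SELECT DISTINCT customer_id FROM orders ORDER BY customer_id;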
I'm looking for a query to get the currently running queries in Azure SQL. None of the T-SQL I've found shows the running queries when I test it (for instance, run a query in one window, then look in another window at the running queries). Also, I'm not looking for anything related to time, CPU, etc., but only the actual running query text.
When I run ...
SELECT * FROM Table --(takes 2 minutes to load)
... and run a standard information query (like from Pinal Dave or this), I don't see the above query (I assume there's another way).
select * from sys.dm_exec_requests should give you what other sessions are doing. You can join this with sys.dm_exec_sql_text to get the statement text if needed. sys.dm_tran_locks gives the locks held/waiting. If this is a V12 server you can also use DBCC INPUTBUFFER. Make sure the connection you are running from is dbo / server admin.
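A sketch of the join described above (run it from a dbo / server-admin connection; excluding the monitoring session itself is my addition):

SELECT r.session_id,
       r.status,
       r.wait_type,
       t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- skip this monitoring query itself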
I am trying to optimize a SQL process using the DMV [sys].[dm_os_wait_stats].
Is there any way to see the waiting/suspended queries for a time period in the past? For example, I'd like records only from 3 PM today.
Currently I clear the wait statistics every time before running the process using
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
GO
I suggest using monitoring tools such as Idera or Redgate SQL Monitor to track SQL Server waits. You can also copy [sys].[dm_os_wait_stats] data into another table periodically and compare the snapshots, as sketched below.
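A sketch of the periodic-copy idea (dbo.WaitStatsHistory is a hypothetical table you would create yourself):

CREATE TABLE dbo.WaitStatsHistory (
    capture_time        DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type           NVARCHAR(60) NOT NULL,
    waiting_tasks_count BIGINT       NOT NULL,
    wait_time_ms        BIGINT       NOT NULL,
    signal_wait_time_ms BIGINT       NOT NULL
);
GO
-- Schedule this (e.g., via SQL Agent) every few minutes; diffing two
-- capture_time snapshots gives the waits for that interval.
INSERT INTO dbo.WaitStatsHistory
    (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats;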
I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that runs a query in a loop. Here is the pseudocode of the program.
myTempID = 0;
while (1) {
    execQuery(SELECT a.id
              FROM a, b, c, d
              WHERE a.id = b.id
                AND b.id = c.id
                AND c.id = d.id
                AND d.id = myTempID)
}
Here are some facts:
a, b, c are empty tables
d has about 5500 rows
The query starts out taking 0 msec (I can see this from the Profiler), but after X iterations it jumps to about 60 msec and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 msec, it stays there no matter the value of myTempID.
To me it sounds like SQL Server is somehow 'de-caching' the query plan? Does this make sense to anyone?
Thanks!
The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the result set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run these tests in SQL Server Management Studio with SQL Profiler tracing the results.
First run, send the SQL results to a temporary table (this should take a second or so). Second run, send the SQL results to the Results window (this should take a few seconds). Note the run time shown in SSMS, then note the time reported by SQL Profiler.
You should see that the time SSMS takes to read the record set, format the results, and display them in the Results window increases the duration that is reported for the query.
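A sketch of those two runs (BigTable stands in for any table that returns a million-plus rows):

-- Run 1: rows land in a temp table; nothing is streamed to the client grid.
SELECT * INTO #discard FROM BigTable;

-- Run 2: every row is streamed to the SSMS Results window.
SELECT * FROM BigTable;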
After all that, my point is: when you run the query from your application, at this level of precision (60 ms) the reported duration alone cannot tell you whether the slowdown is in the database, the network, or the application.
You should create a test script and run the query in SSMS and see if the query time degrades when your application is not part of the loop.
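For example, a hypothetical SSMS loop that mirrors your pseudocode with the client application taken out of the picture:

DECLARE @i INT;
SET @i = 0;
WHILE @i < 500
BEGIN
    SELECT a.id
    FROM a
    JOIN b ON a.id = b.id
    JOIN c ON b.id = c.id
    JOIN d ON c.id = d.id
    WHERE d.id = @i;
    SET @i = @i + 1;
END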
SQL Profiler 2008 records duration in microseconds, but only displays it in milliseconds; so rounding is an issue. Save the trace as a Trace Table and look at results in the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
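For example, assuming you saved the trace as a table named dbo.MyTrace (you pick the name at save time):

-- Duration is stored in microseconds in the saved trace table.
SELECT TextData, Duration, StartTime
FROM dbo.MyTrace
WHERE EventClass = 12  -- SQL:BatchCompleted
ORDER BY StartTime;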
You can also have the Profiler return the execution plan. You can look at the execution plan before and after the duration increases and see if the execution plan is changing.