If I query the V$SQL view, does it include SQL that is still running and not yet finished?

I'm running a query below in an oracle db.
SELECT program_id, program_line#, sql_text
FROM V$SQL VS ,
ALL_USERS AU
WHERE (executions >= 1)
AND (parsing_user_id != 0)
AND (AU.user_id(+) = VS.parsing_user_id)
AND UPPER(AU.USERNAME) IN (UPPER('CARGO'))
ORDER BY last_active_time DESC;
I just wanted to ask: does the result returned by this query include SQL that is still running, or queries that have timed out or been cancelled by the user?

Yes, V$SQL shows information about queries that are still running. From Oracle's documentation:
V$SQL lists statistics on shared SQL areas without the GROUP BY clause
and contains one row for each child of the original SQL text entered.
Statistics displayed in V$SQL are normally updated at the end of query
execution. However, for long running queries, they are updated every 5
seconds. This makes it easy to see the impact of long running SQL
statements while they are still in progress.
As for the second part of your question, the answer is: it depends. The length of time a query stays in the cache (where V$SQL gathers its information) depends on the size of your cache and the number of unique/distinct queries running at any given time. If the same kinds of queries are frequently run in the database (i.e. they are cached), old queries will remain in the V$SQL view for a longer period of time than in databases where many distinct queries are being executed (assuming everything else is the same). Distinct queries that aren't already stored in the cache are added to the library cache, pushing older/timed-out queries out of the cache. If you want to influence how long queries stay in the cache, you will have to configure the size of the shared pool. I would recommend reading up on the Library Cache at https://docs.oracle.com/database/121/TGDBA/tune_shared_pool.htm#TGDBA560
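As a rough sketch (the size shown is purely illustrative, and ALTER SYSTEM requires DBA privileges; the right value depends on your workload), you can check the current shared pool size and resize it like this:
-- Sketch: inspect and resize the shared pool
SELECT name, bytes FROM v$sgainfo WHERE name = 'Shared Pool Size';
ALTER SYSTEM SET shared_pool_size = 512M;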

Related

How can I find the SQL query for an execution plan?

Some program generates and sends queries to SQL Server (on a high-load production system). I want to get the plan of a specific query on a specific table. I start Profiler with "Showplan XML" and set a filter on TextData (like %MyTable%) and DatabaseName. It shows rows with XML in TextData that describe execution plans (for all queries on my table). But I know that there are 5 different SQL queries for this table.
How can I match a specific query with its corresponding plan without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise running the SQL on the query cache (as in the linked question someone else mentioned) will probably have the lowest impact as it just queries a system table rather than adding diagnostics to every query.
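For example, here is a sketch of that plan-cache approach (assuming SQL Server 2005 or later; the LIKE filter on MyTable mirrors your Profiler filter and is only illustrative). It pulls the cached statement text and its cached plan side by side, so you can see which of the five statements produced which plan:
-- Sketch: match cached statement text to its cached plan
SELECT st.text       AS query_text,
       qp.query_plan AS plan_xml,
       qs.execution_count
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE st.text LIKE '%MyTable%';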

sql temp table join between servers

So I have a summary I need to return to the end-user application.
It should accept 3 parameters DateType, StartDate, EndDate.
Date Type will determine the date field I use to filter the data.
The way I accomplished this was by putting all the IDs of the records for a date type into a TEMP table and then joining my summary to the list of IDs.
This worked fine when running the query on the SQL server that houses the data.
However, that is a replicated server, so when I compiled it into a stored proc on the server with the rest of the application data, it slowed the query down, i.e. 2 seconds vs. 50 seconds.
I think the cross join from the temp table that is created on the SQL server to the tables on the replication server is causing the slowdown.
Are there any methods or techniques that I can use to get around this and build this all in one stored procedure?
If I create 3 stored procedures with their own date range, then they are fast again. However, this means maintaining multiple stored procs for the same thing.
First off, if you are running a version of SQL Server older than 2012 SP1, one problem is that users who aren't allowed to run DBCC SHOW_STATISTICS (which is most users who aren't sysadmins, see the "Permissions" section in the documentation) don't get access to statistics on remote tables. This can severely cripple the optimizer's ability to generate a good execution plan. Upgrading SQL Server or granting more permissions can help there.
If your query involves filtering or joining on a character column, make sure the remote server is flagged in the linked server options as "collation compatible". If this option is off, SQL Server can't assume strings can be compared across the servers and it will start pumping entire tables up and down just to make sure the data ends up where the comparison has to be made.
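A sketch of how that option is set (the linked server name here is a placeholder for your own):
-- Sketch: mark the linked server as collation compatible
EXEC sp_serveroption @server   = 'RemoteServer',
                     @optname  = 'collation compatible',
                     @optvalue = 'true';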
If the execution plan is as good as it gets and it's still not good enough, one general (lame) technique is to transfer all data locally first (SELECT * INTO #localtable FROM remote.db.schema.table), then run the query as a non-distributed query. Obviously, in order for this to work, the remote table cannot be "too big" and in some cases this actually has worse performance, depending on how many rows are involved. But it's always worth considering, because the optimizer does a better job with local tables.
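A minimal sketch of that pattern, with hypothetical server and table names:
-- Sketch: copy the remote rows locally, then join without a distributed query
SELECT *
INTO   #LocalCopy
FROM   RemoteServer.RemoteDb.dbo.RemoteTable;

SELECT s.SummaryValue, l.Id
FROM   dbo.LocalSummary s
JOIN   #LocalCopy l ON l.Id = s.Id;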
Another approach that avoids pulling tables together across servers is packing up data in parameters to remote stored procedure calls. Entire tables can be passed as XML through an NVARCHAR(MAX), since neither XML columns nor table-valued parameters are supported in distributed queries. The basic idea is the same: avoid the need for the optimizer to figure out an efficient distributed query. The best approach greatly depends on your data and your query, obviously.
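A sketch of that idea, using hypothetical procedure and table names: the ID list is serialized to XML locally, shipped as NVARCHAR(MAX), and shredded back into rows inside the remote procedure.
-- Sketch: pass an ID list to a remote procedure as XML text instead of joining across servers
DECLARE @idXml NVARCHAR(MAX) =
    (SELECT Id FROM #DateTypeIds FOR XML PATH('row'), ROOT('ids'));

EXEC RemoteServer.ApplicationDb.dbo.usp_GetSummary @IdXml = @idXml;

-- Inside usp_GetSummary the remote server can shred the parameter back into a table:
-- SELECT r.n.value('(Id)[1]', 'int') AS Id
-- FROM   (SELECT CAST(@IdXml AS XML) AS x) AS src
-- CROSS APPLY src.x.nodes('/ids/row') AS r(n);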

Azure Get Live Queries

I'm looking for a query to get the currently running queries in Azure SQL. None of the T-SQL I've found shows the running queries when I test it (for instance, run a query in one window, then look in another window at the running queries). Also, I'm not looking for anything related to time, CPU, etc., only the actual running query text.
When I run ...
SELECT * FROM Table --(takes 2 minutes to load)
... and run a standard information query (like from Pinal Dave or this), I don't see the above query (I assume there's another way).
select * from sys.dm_exec_requests should give you what other sessions are doing. You can join this with sys.dm_exec_sql_text to get the text if needed. sys.dm_tran_locks shows the locks held / being waited on. If this is a V12 server you can also use DBCC INPUTBUFFER. Make sure that the connection you are running this from is dbo / server admin.
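As a sketch of that join (it excludes the monitoring session itself; column choice is illustrative):
-- Sketch: currently executing requests with their statement text
SELECT r.session_id, r.status, r.wait_type, t.text AS running_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id <> @@SPID;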

SQL Server query degrades over time from 0 to 60msec

I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that is running a query in a loop. Here is the pseudocode of the program.
myTempID = 0;
while (1) {
    execQuery(select a.id
              from a, b, c, d
              where a.id = b.id
                and b.id = c.id
                and c.id = d.id
                and d.id = myTempID)
}
Here are some facts
a,b,c are empty tables
d has about 5500 rows
The query starts out taking '0 msec' (I can see this from the Profiler), but then after X iterations it jumps to about 60 msec and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 msec, it just stays there no matter the value of myTempID.
To me it sounds like SQL Server is somehow 'de-caching' the query plan? Does this make sense to anyone?
Thanks!
The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the record set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run these tests in SQL Server Management Studio and run SQL Profiler to trace the results.
First run, send the SQL results to a temporary table (should take a second or so). Second run, send the SQL results to the Results window (should take a few seconds). Note the run time shown in SSMS. Then note the time reported by SQL Profiler.
You should see that the time SSMS takes to read the record set, format the results, and display them in the Results window increases the duration that is reported for the query.
After all that, I'm saying that when you are running the query from your application, at that level of precision (60 ms) you cannot tell, just from the reported duration, whether the slowdown is coming from the database, the network, or the application.
You should create a test script and run the query in SSMS and see if the query time degrades when your application is not part of the loop.
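For example, a sketch of such a test script (it reuses your table names and assumes they exist; adjust the iteration count as needed):
-- Sketch: drive the same query in a loop from SSMS, with the application out of the picture
DECLARE @i INT = 0;
WHILE @i < 500
BEGIN
    SELECT a.id
    FROM   a
    JOIN   b ON a.id = b.id
    JOIN   c ON b.id = c.id
    JOIN   d ON c.id = d.id
    WHERE  d.id = @i;

    SET @i += 1;
END;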
SQL Profiler 2008 records duration in microseconds, but only displays it in milliseconds; so rounding is an issue. Save the trace as a Trace Table and look at results in the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
You can also have the Profiler return the execution plan. You can look at the execution plan before and after the duration increases and see if the execution plan is changing.

Begin and monitor progress on long running SQL queries via ajax

Is it possible to start a query that will take a significant amount of time and monitor progress from the UI via Ajax?
I considered starting the process as a "run once" job that is scheduled to run immediately. I could store the results in a temporary table for quick retrieval once it's complete. I could also log the run time of the report and average that out, to guestimate the running time for the progress bar.
I use Microsoft SQL 2005 at the moment, but I'm willing to move to another DBMS such as SQL 2008, MySQL, etc. if necessary.
One idea, if the long-running job populates another table:
You have a 2nd database connection to monitor how many rows are processed out of the source rows, and show a simple "x rows processed" message every few seconds
SELECT COUNT(*) FROM TargetTable WITH (NOLOCK)
If you have a source table too:
SELECT COUNT(*) FROM SourceTable WITH (NOLOCK)
..then you can use "x of y rows processed"
Basically, you have to use a 2nd connection to monitor the first. However, you also need something to measure...
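A sketch of the polling query the second connection might run (the table names are placeholders carried over from above):
-- Sketch: one round-trip that returns both counts for an "x of y rows processed" message
SELECT (SELECT COUNT(*) FROM TargetTable WITH (NOLOCK)) AS rows_processed,
       (SELECT COUNT(*) FROM SourceTable WITH (NOLOCK)) AS rows_total;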