Loop SQL Query For Specific Time - sql

I have a SQL query that needs to run in a loop every 60 seconds, pulling specific information from the system views sys.dm_exec_requests and sys.dm_exec_sessions and dumping it into a separate table. After a specified time I would like the loop to stop. How should the loop be written?

This sounds like a SQL Agent job. If so, the short form of the answer is:
Create the job with one step that runs the query
Add a Schedule that runs it once a minute, starting whenever you want it to start
Set the schedule to stop running it when the cut-off time is reached
The long form, of course, is all the detail work behind creating a SQL Agent job. Best to read up on them in Books Online. A scripted sketch of the setup follows.
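For reference, the same setup can be scripted against msdb rather than clicked through. A minimal sketch, assuming a hypothetical job name and a dbo.LogCurrentData procedure like the one suggested in the answers below (the database name and the start/cut-off times are placeholders):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'CaptureRequests';
EXEC dbo.sp_add_jobstep
    @job_name = N'CaptureRequests',
    @step_name = N'Log requests',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.LogCurrentData;',
    @database_name = N'YourDb';            -- hypothetical target database
EXEC dbo.sp_add_schedule
    @schedule_name = N'EveryMinute',
    @freq_type = 4,                        -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,                 -- repeat in units of minutes
    @freq_subday_interval = 1,             -- every 1 minute
    @active_start_time = 90000,            -- 09:00:00, placeholder start
    @active_end_time = 170000;             -- 17:00:00, placeholder cut-off
EXEC dbo.sp_attach_schedule @job_name = N'CaptureRequests', @schedule_name = N'EveryMinute';
EXEC dbo.sp_add_jobserver @job_name = N'CaptureRequests';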

Don't do this in a loop. Do it with a job.
Write a sproc that runs the query and saves the results, then call it from the job.
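A minimal sketch of such a sproc, assuming a snapshot table of your own design (the table name and the column selection here are illustrative only):

CREATE TABLE dbo.RequestSnapshot (
    capture_time datetime      NOT NULL DEFAULT GETDATE(),
    session_id   smallint      NOT NULL,
    login_name   nvarchar(128) NULL,
    [status]     nvarchar(30)  NULL,
    command      nvarchar(32)  NULL,
    wait_type    nvarchar(60)  NULL,
    cpu_time     int           NULL
);
GO
CREATE PROCEDURE dbo.LogCurrentData
AS
BEGIN
    SET NOCOUNT ON;
    -- Snapshot the currently executing requests joined to their sessions
    INSERT INTO dbo.RequestSnapshot (session_id, login_name, [status], command, wait_type, cpu_time)
    SELECT r.session_id, s.login_name, r.status, r.command, r.wait_type, r.cpu_time
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = r.session_id;
END
GO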

I think you should use a job as well, but in some work environments that is not practical. So you could have something like:
DECLARE @StopTime datetime
SET @StopTime = DATEADD(hour, 2, getdate()) -- e.g. stop after two hours
WHILE getdate() < @StopTime
BEGIN
exec LogCurrentData
WAITFOR DELAY '00:01:00'; -- wait 1 minute
END

I think the best way is to create a Job.
There is a post that explains how to create a job step by step (with images) in SQL Server.
You can visit the post here.
If you prefer a video tutorial, you can visit this link.

Related

How to calculate accumulated sum of query timings?

I have a SQL file running many queries. I want to see the accumulated sum of all the query times. I know that if I turn on timing by putting \timing at the beginning of the script, like
\timing
query 1;
query 2;
query 3;
...
query n;
it will show the time each query takes to run. However, I need the accumulated result for all queries, without having to add them up manually.
Is there a systematic way? If not, how can I fetch the interim times and store them in a variable?
pg_stat_statements is a good module that provides a means for tracking execution statistics.
First, add pg_stat_statements to shared_preload_libraries in the postgresql.conf file. To know where this .conf file exists in your filesystem, run show config_file;
shared_preload_libraries = 'pg_stat_statements'
Restart the Postgres database.
Create the extension:
CREATE EXTENSION pg_stat_statements;
Now the module provides a view, pg_stat_statements, which helps you analyze various query execution metrics.
Reset the statistics collected so far before running your queries:
SELECT pg_stat_statements_reset();
Now, execute your script file containing queries.
\i script_file.sql
You will then have the timing statistics for all the queries executed. To get the total time taken, simply run
select sum(total_time) from pg_stat_statements
where query !~* 'pg_stat_statements';
The time you get is in milliseconds (in PostgreSQL 13 and later the column is named total_exec_time), and it can be converted to a desired format using various interval and timestamp related Postgres functions.
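For example, one way to render the sum as an interval (a minimal sketch) is to multiply it by a one-millisecond interval:

select sum(total_time) * interval '1 millisecond' as total_runtime
from pg_stat_statements
where query !~* 'pg_stat_statements';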
If you want to time the whole script, on Linux or Mac you can use the time utility to launch the script.
The measurement in this case is a bit more than the sum of the raw query times, because it includes some overhead of starting and running the psql command. On my system this overhead is around 20ms.
$ time psql < script.sql
…
real 0m0.117s
user 0m0.008s
sys 0m0.007s
The real value is the time it took to execute the whole script, including the aforementioned overhead.
The approach in this answer is a crude, simple client-side way to measure the runtime of the overall script. It is not useful for measuring millisecond-precision server-side execution times, but it might still be sufficient for many use cases.
The solution of Kaushik Nayak is a far more precise method to time executions directly on the server. It also provides much more insight into the execution (e.g. query-level times).

Measure execution time in Altibase DBMS

I need to measure the execution time of a query in the Altibase DBMS.
For example, I need to know, in ms, the time it took to run SELECT * FROM example;
I tried the methods that traditional DBMSs like MySQL or Oracle use, and they don't work.
There are a couple of methods. The first is to analyze the Altibase logs. The other is to write a small program that records the time from sending the SELECT to getting the result back. I am also learning to use Altibase - we can learn together.
I found the solution.
After connecting with the client via isql -u sys -p manager -sysdba, you need to execute the following:
SET TIMING ON;
Subsequent queries will then report their execution time.

How to find out if scheduled sql job did not run

Let's say I have the following scenario:
In SQL Server Agent, there is a scheduled job that runs every day at 6 AM and works fine every day. One day the server goes down from 5 AM until 8 AM, so by the time it is back up it is already two hours past the job's scheduled run time.
How can I check that the job should have run but didn't, and indicate some kind of "Missed Schedule" status?
Approach:
Run EXEC MSDB.DBO.SP_GET_COMPOSITE_JOB_INFO @job_id and check whether the returned last_run_date & last_run_time exist in [msdb].[dbo].[sysjobhistory]; if they don't, it means the job was scheduled but did not run. Is this approach correct?
Any suggestions/comments?
Thanks in advance.
I was just trying to figure out this seemingly already-solved problem. It isn't as common a need as one might think. Anyhow, I was about to embark on the journey of writing my own check when I found the script below. It seems to work, in that it caught the job I wanted it to find on my server.
The issue with your approach is that it breaks down if the job recurs frequently, or if it runs every day but skipped Saturday and you don't check until Monday. It seemed to me that to do this right, you need to expand each schedule's frequency into a table of the times the job ought to have run, then check the history to see whether it actually did. The linked script is long and will take time to understand, but it does seem to do exactly that.
http://www.nuronconsulting.com/TemplateScripts_SQL_Agent_JobsJobScript_FindMissedJobs_sql.aspx
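As a much simpler first pass, here is a hedged sketch that ignores schedules entirely and assumes daily jobs: it flags enabled jobs with no successful outcome row in today's history. It is not a replacement for the full script above.

SELECT j.name
FROM msdb.dbo.sysjobs AS j
WHERE j.enabled = 1
  AND NOT EXISTS (
      SELECT 1
      FROM msdb.dbo.sysjobhistory AS h
      WHERE h.job_id = j.job_id
        AND h.step_id = 0      -- the job outcome row
        AND h.run_status = 1   -- succeeded
        AND h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112))  -- today as YYYYMMDD
  );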

How do I display the query time when a query completes in Vertica?

When using vsql, I would like to see how long a query took to run once it completes. For example, when I run:
select count(distinct key) from schema.table;
I would like to see an output like:
5678
(1 row)
total query time: 55 seconds.
If this is not possible, is there another way to measure query time?
In vsql type:
\timing
and then hit Enter. You'll like what you'll see :-)
Repeating that will turn it off.
Regarding the other part of your question:
is there another way to measure query time?
Vertica can log a history of all queries executed on the cluster, which is another source of query times. Before 6.0 the relevant system table was QUERY_REPO; starting with 6.0 it is QUERY_REQUESTS.
Assuming you're on 6.0 or higher, QUERY_REQUESTS.REQUEST_DURATION_MS will give you the query duration in milliseconds.
Example of how you might use QUERY_REQUESTS:
select *
from query_requests
where request_type = 'QUERY'
and user_name = 'dbadmin'
and start_timestamp >= CURRENT_DATE
and request ilike 'select%from%schema.table%'
order by start_timestamp;
The QUERY_PROFILES.QUERY_DURATION_US and RESOURCE_ACQUISITIONS.DURATION_MS columns may also be of interest to you. Here are the short descriptions of those tables in case you're not already familiar:
RESOURCE_ACQUISITIONS - Retains information about resources (memory, open file handles, threads) acquired by each running request for each resource pool in the system.
QUERY_PROFILES - Provides information about queries that have run.
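In the same spirit as the QUERY_REQUESTS example above, a sketch against QUERY_PROFILES might look like the following (the column names are assumed from the table's documentation; verify them on your version):

select user_name, query, query_duration_us
from query_profiles
where user_name = 'dbadmin'
  and query ilike 'select%'
order by query_duration_us desc
limit 10;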
I'm not sure how to enable that in vsql, or whether it's possible. But you could get that information from a script.
Here's the idea as runnable Perl (I used to use Perl):
my $start = time();
system("vsql -c 'select * from table'");
print "elapsed: ", time() - $start, " seconds\n";
Or put the times into variables and do some subtraction.
The other option is to use some tool like Toad to connect to Vertica instead of using vsql.

How to remove a tuple from an SQL table after a timeout?

I am faced with a peculiar requirement which is as follows:
A network-intensive operation is triggered on a server by multiple clients through a web interface. However, only one operation is allowed at a time, so an entry (tuple) is made in an SQL table to indicate that an operation is in progress. Once the operation completes (irrespective of success or failure), the appropriate result is displayed back to the client(s) and the corresponding tuple is removed from the SQL table.
Since the operation is network-intensive, a scenario has to be handled where the operation is "considered" cancelled after some timeout (10 minutes).
Is there ANY way the lifetime of a row in SQL can be associated with a timeout value, so that it is deleted after a certain time? My application is primarily written in Java 1.5 and EJB 3.0, using JPA/Hibernate to access an Oracle 10g DB engine.
Thanks in advance.
Regards,
Nagendra U M
I would suggest that you try using a timestamp column containing the start time of the task.
A BEFORE trigger can then be made to delete the old rows, if their task timed out, before a new one is inserted.
If you want to have multiple tasks with different timeouts, you can even add a column with the timeout in seconds; just code your trigger accordingly. A rough sketch of this idea follows.
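A minimal sketch of that trigger in Oracle (the operation_lock table and started_at column are hypothetical names; a statement-level trigger is used deliberately, since unlike a row-level trigger it may modify its own table):

CREATE OR REPLACE TRIGGER trg_purge_stale_ops
BEFORE INSERT ON operation_lock
BEGIN
  -- Treat any in-progress entry older than 10 minutes as timed out
  DELETE FROM operation_lock
  WHERE started_at < SYSTIMESTAMP - INTERVAL '10' MINUTE;
END;
/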
I don't know whether Oracle has this kind of facility, but I don't think any DB engine has one built in.
If you want to do it at the DB level, you must have a datetime column, e.g. 'CreatedDate', in the table, holding the datetime when each record was created. Write a procedure and put it in a scheduled job. This job will run every 10 minutes and remove records more than 10 minutes old. The query will be like this.
T-SQL (please convert it according to your DB engine):
DELETE FROM yourtable WHERE CreatedDate < DATEADD(mi, -10, GETDATE())
This will delete all records older than 10 minutes from table.
This is just to give you an idea of a scheduled job. It is in SQL Server; I don't know about Oracle.
step_by_step_guide_to_add_a_sql_job_in_sql_server_2005
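Since the question targets Oracle 10g, the counterpart there would be a DBMS_SCHEDULER job. A hedged sketch, reusing the hypothetical operation_lock table and started_at column from the trigger example above:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_STALE_OPS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM operation_lock WHERE started_at < SYSTIMESTAMP - INTERVAL ''10'' MINUTE; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',  -- run every 10 minutes
    enabled         => TRUE);
END;
/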
It sounds like you're implementing a mutex using the database; take a look at this question and see if it helps. Transactional access to a flag table should solve this for you, as long as you catch both success and failure states in your server code.