Sequential execution of SQL scripts that depend on the previous script

I'm probably asking a silly question, but is there a way to execute SQL scripts that depend on each other?
I have a set of 20 scripts, and each one depends on the table that the previous script creates. Currently it's a case of waiting for each one to finish without error before setting off the next one. This was fine for a while, but now the total run time is around 15 hours, so it would be really good if I could just set this off over a weekend and leave it without having to keep an eye on things.

You can create a stored procedure like this:
CREATE PROC SPWaitforTable
    @tableName varchar(255)
AS
WHILE 1 = 1
BEGIN
    IF EXISTS (SELECT name FROM sys.tables WHERE name = @tableName)
        RETURN
    ELSE
        WAITFOR DELAY '00:00:01'
END
You can then kick off all of your scripts at once; each one will wait until the table it depends on has been created before proceeding.
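For example, the second script could wait for the table that the first script creates (a minimal sketch; Table1 is a placeholder name, not something from the question):
-- Script 2: block until Script 1 has created its table
EXEC SPWaitforTable 'Table1';   -- returns once Table1 exists in sys.tables

-- the rest of Script 2 can now safely reference Table1
SELECT COUNT(*) FROM Table1;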

Related

How to prevent SQL Stored Procedure Alters while the stored procedure is running?

We have a stored procedure that runs hourly and requires heavy modification. There is an issue where someone will edit it while the stored proc is running and will cause the stored proc to break and end. I am looking for an error to pop up when someone tries to edit a stored procedure while it is running, rather than breaking the execution.
It's a SQL Server Agent job that runs hourly; I get "The definition of object 'stored_procedure' has changed since it was compiled."
Is there something I can add to the procedure? A setting?
I think you can use a DDL trigger at the database level to prevent changes, and inside the trigger check whether the stored procedure is currently running, something like this:
USE [YourDatabase]
GO
ALTER TRIGGER [DDLTrigger_Sample]
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
BEGIN
    IF EXISTS (SELECT TOP 1 1
               FROM sys.dm_exec_requests req
               CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) sqlplan
               WHERE sqlplan.objectid = OBJECT_ID(N'GetFinanceInformation'))
    BEGIN
        PRINT 'GetFinanceInformation is running and cannot be changed'
        ROLLBACK
    END
END
That way you can prevent the stored procedure from being changed while it is executing; if it is not executing, changes go through as usual. Hope this helps.
You should do some research and testing to confirm this is the case. Altering a stored procedure while it is executing should not impact the run.
Open two SSMS windows, run query 1 in the first, then switch to window 2 and run that query.
Query 1
CREATE PROCEDURE sp_altertest
AS
BEGIN
SELECT 'This is a test'
WAITFOR DELAY '00:00:10'
END
GO
EXEC sp_altertest
Query2
alter procedure sp_altertest AS
BEGIN
SELECT 'This is a test'
WAITFOR DELAY '00:00:06'
END
GO
Exec sp_altertest
Query 1 should continue to run with a 10-second execution time, while query 2 will run with a 6-second runtime. The stored procedure is cached at the time of the run and held in memory, so the ALTER should have no impact.

SQL INSERT - how to execute a list of queries automatically

I've never done this, so apologies if I'm being quite vague.
Scenario
I need to run a long series of INSERT SQL queries. The data is inserted into a table to be processed by a web service's client, i.e. the data is uploaded to a different server and the table is cleared as the process progresses.
What I've tried
I have tried to add a delay to each Insert statement like so
WAITFOR DELAY '00:30:00'
INSERT INTO TargetTable (TableName1, Id, Type) SELECT 'tablename1', ID1 , 1 FROM tablename1
WAITFOR DELAY '00:30:00'
INSERT INTO TargetTable (TableName2, Id, Type) SELECT 'tablename2', ID2 , 1 FROM tablename2
But this has the disadvantage of assuming that a query will finish executing in 30 minutes, which may not be the case.
Question
I have run the queries manually in the past, but that's excruciatingly tedious. So I would like to write a program that does that for me.
The program should:
Run each query in the order given
Wait to run the next query until the previous one has been processed, i.e. until the target table is clear.
I'm thinking of a script that I can copy into the command prompt console, SQL itself or whatever and run.
How do I go about this? Windows service application? Powershell function?
I would appreciate any pointers to get me started.
You need to schedule a job in SQL Server:
http://www.c-sharpcorner.com/UploadFile/raj1979/create-and-schedule-a-job-in-sql-server-2008/
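If the job also needs to satisfy the "wait until the target table is clear" requirement, each step could poll the table instead of using a fixed delay (a minimal sketch based on the statements in the question; the 30-second poll interval is an arbitrary choice):
-- Wait until the web service client has drained the target table
WHILE EXISTS (SELECT 1 FROM TargetTable)
    WAITFOR DELAY '00:00:30';   -- re-check every 30 seconds

-- Only then run the next insert
INSERT INTO TargetTable (TableName1, Id, Type)
SELECT 'tablename1', ID1, 1 FROM tablename1;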

same Query is executing with different time intervals

I have a scenario in which I have one stored proc that contains a set of SQL statements (a combination of joins and subqueries as well; the query is too large to display), and the final result is stored in a temp table.
It is executed by a user from the frontend, or by a programmer from the backend, with specific permissions.
The problem is that there is a difference in execution time for this query.
Sometimes it takes 10 minutes, sometimes it takes 1 hour, but the average elapsed time is 10 minutes, and one common thing is that it always returns approximately the same number of records.
As ErikL mentioned, checking the execution plan of the query is a good start. In Oracle 11g you can use DBMS_PROFILER. This will give you information about the offending statements. I would run it multiple times and see what the difference is between the run times. First check whether DBMS_PROFILER is installed; I believe it comes as a separate package.
To start the profiler (the string argument is just a comment that labels the run):
SQL> execute dbms_profiler.start_profiler('your_run_comment');
Run your stored procedure:
SQL> exec your_procedure_name
Stop the profiler:
SQL> execute dbms_profiler.stop_profiler;
This will show you all statements in your stored procedure and their associated run times, and this way you can narrow the problem down, possibly to a single query that is causing the difference.
Here is the Oracle doc on DBMS_PROFILER:
Oracle DBMS PROFILER
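Once the run is stopped, the collected timings can be queried from the profiler tables (a minimal sketch, assuming the standard plsql_profiler_runs / plsql_profiler_units / plsql_profiler_data tables created by proftab.sql exist in the schema):
-- Line-level timings for the most recent profiler run
SELECT u.unit_name,
       d.line#,
       d.total_occur,
       d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data d
       ON d.runid = u.runid AND d.unit_number = u.unit_number
WHERE  u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
ORDER  BY d.total_time DESC;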
If you are new to Oracle, you can use dbms_output or a logging table to store intermediate execution times; that way you will know which SQL statement is causing the issue.
declare
   run_nbr number;
begin
   run_nbr := 1; -- or get it from a sequence
   SQL1;  -- first statement of your procedure
   log_step(run_nbr, user, 'sql1', sysdate);
   SQL2;  -- second statement
   log_step(run_nbr, user, 'sql2', sysdate);
   commit;
end;
Here the log_step procedure is nothing but a simple insert statement that writes to a table, say "RUN_LOG", which has minimal columns, say run_nbr, username, sql_name and execution_date:
procedure log_step(run_nbr number, username varchar2, sql_name varchar2, execution_date date)
is
begin
   insert into run_log values (run_nbr, username, sql_name, execution_date);
   -- commit; -- un-comment if you are using pragma autonomous_transaction
end;
Adding these log statements is a little time-consuming, but it can give you an idea of the execution times. Later, once you know the issue, you simply remove or comment out these lines, or keep a backup of your original procedure without the log statements and re-compile it after pinpointing the issue.
I would check the execution plan of the query, maybe there are profiles in there that are not always used.
Or, if that doesn't solve it, you can also try tracing the session that calls the SP from the frontend. There's a very good explanation about tracing here: http://tinky2jed.wordpress.com/technical-stuff/oracle-stuff/what-is-the-correct-way-to-trace-a-session-in-oracle/
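If you go the tracing route, one way to do it (a minimal sketch, assuming you can identify the frontend session in V$SESSION; APP_USER is a placeholder) is DBMS_MONITOR:
-- Find the session that calls the stored procedure
SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';

-- Enable a SQL trace (with waits and binds) for that session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => :sid, serial_num => :serial, waits => TRUE, binds => TRUE);

-- ... reproduce the slow run, then turn tracing off ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => :sid, serial_num => :serial);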

sql server 2005 stored procedure unexpected behaviour

I have written a simple stored procedure (run as a job) that checks users' subscribed keyword alerts. When an article is posted, the stored procedure sends an email to those users if a subscribed keyword matches the article title.
One section of my stored procedure is:
OPEN @getInputBuffer
FETCH NEXT
FROM @getInputBuffer INTO @String
WHILE @@FETCH_STATUS = 0
BEGIN
    --PRINT @String
    INSERT INTO #Temp(ArticleID, UserID)
    SELECT A.ID, @UserID
    FROM CONTAINSTABLE(Question, (Text), @String) QQ
    JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
    WHERE A.ID > @ArticleID
    FETCH NEXT
    FROM @getInputBuffer INTO @String
END
CLOSE @getInputBuffer
DEALLOCATE @getInputBuffer
This job runs every 5 minutes and checks the last 50 articles.
It was working fine for the last 3 months, but a week ago it behaved unexpectedly.
The problem is that it sent irrelevant results.
The @String contains the user's alert keyword, and it is matched against the latest articles using full-text search. The normal execution time is 3 minutes, but while the problem occurred the execution time was 3 days.
The current status is that it's working fine, but we are unable to find any reason why it sent irrelevant results.
Note: I have already removed noise words from the user alert keywords.
I am using SQL Server 2005 Enterprise Edition.
I don't have the answer, but have you asked all the questions?
Does the long execution time always happen for all queries? (Yes--> corruption? disk problems?)
Or is it only for one @String? (Yes--> anything unusual about the term? Is there a "hidden" character in the string that doesn't show up in your editor?)
Does it work fine for that @String against other sets of records, maybe from a week ago? (Yes--> any unusual strings in the data rows?)
Can you reproduce it at will? (From your question, it seems that the problem is gone and you can't reproduce it.) Was it only for one person, at one time?
Hope this helps a bit!
Does the CONTAINSTABLE(Question,(Text),@String) query work in an ad hoc query window? If not, it may be that your full-text search indexes are corrupt and need rebuilding:
Rebuild a Full-Text Catalog
Full-Text Search How-to Topics
Also check any normal indexes on the Article table; they might just need rebuilding for statistical purposes, or could be corrupt too:
UPDATE STATISTICS (Transact-SQL)
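For example (a minimal sketch; YourFullTextCatalog is a placeholder for the catalog that holds the full-text index on Question):
-- Rebuild the full-text catalog behind the CONTAINSTABLE query
ALTER FULLTEXT CATALOG YourFullTextCatalog REBUILD;

-- Rebuild the regular indexes on Article and refresh its statistics
ALTER INDEX ALL ON Article REBUILD;
UPDATE STATISTICS Article;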
I'd go along with Glen Little's line of thinking here.
If a user has registered a subscribed keyword which coincidentally (or deliberately) contains some of the CONTAINSTABLE search predicates e.g. NEAR, then the query may take longer than usual. Not perhaps "minutes become days" longer, but longer.
Check for subscribed keywords containing *, ", NEAR, & and so on.
The CONTAINSTABLE function allows for a very complex set of criteria. Consider the FREETEXTTABLE function which has a lighter matching algorithm.
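For comparison, the cursor body above could be switched to FREETEXTTABLE with essentially the same shape (a sketch only; whether the looser matching is acceptable depends on your alerting requirements):
-- FREETEXTTABLE treats @String as plain text rather than a CONTAINS expression,
-- so operators such as NEAR or "*" in a subscribed keyword are not interpreted
SELECT A.ID, @UserID
FROM FREETEXTTABLE(Question, (Text), @String) QQ
JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
WHERE A.ID > @ArticleID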
1) How do you know it sends irrelevant results?
If it is because a user reported the problem: are you sure she didn't change her keywords between the mail and the report?
Can you add some automatic check at the end of the procedure to verify whether it gathered bad results? Perhaps then you can trap the cases where problems occur.
2) "This job runs every 5 minutes and checks the last 50 articles."
Are you sure it's not related to timing? If the job takes more than 5 minutes one time, what happens? A second instance starts...
You do not show your cursor declaration; is it local, or could there be some kind of interference if several processes run simultaneously? Perhaps try to add some kind of locking mechanism.
Since the cursors are nested, you will want to try the following. It's my understanding that testing for zero can get you into trouble when the cursors are nested. We recently changed all of our cursors to something like this:
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        INSERT INTO #Temp(ArticleID, UserID)
        SELECT A.ID, @UserID
        FROM CONTAINSTABLE(Question, (Text), @String) QQ
        JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
        WHERE A.ID > @ArticleID
    END
    FETCH NEXT FROM @getInputBuffer INTO @String
END

SQL Server simple Insert statement times out

I have a simple table with 6 columns. Most of the time any insert statements to it works just fine, but once in a while I'm getting a DB Timeout exception:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
Timeout is set to 10 seconds.
I should mention that I'm using NHibernate and that the statement also includes a "select SCOPE_IDENTITY()" right after the insert itself.
My thought was that the table was locked or something, but there were no other statements running on that table at that time.
All the inserts are very simple, everything looks normal in sql profiler, the table has no indices but the PK (Page fullness: 98.57 %).
Any ideas on what should I look for?
Thanks.
I think your most likely culprit is a blocking lock from another transaction (or maybe from a trigger or something else behind the scenes).
The easiest way to tell is to kick off the INSERT, and while it's hung, run EXEC SP_WHO2 in another window on the same server. This will list all of the current database activity, and has a column called BLK that will show you if any processes are currently blocked. Check the SPID of your hung connection to see if it has anything in the BLK column, and if it does, that's the process that's blocking you.
Even if you don't think there are any other statements running, the only way to know for sure is to list the current transactions using an SP like that one.
This question seems like a good place for a code snippet which I used to see the actual SQL text of the blocked and blocking queries.
The snippet below employs the convention that SP_WHO2 returns " ." text for BlockedBy for the non-blocked queries, and so it filters them out and returns the SQL text of the remaining queries (both "victim" and "culprit" ones):
--prepare a table so that we can filter out sp_who2 results
DECLARE @who TABLE(BlockedId INT,
Status VARCHAR(MAX),
LOGIN VARCHAR(MAX),
HostName VARCHAR(MAX),
BlockedById VARCHAR(MAX),
DBName VARCHAR(MAX),
Command VARCHAR(MAX),
CPUTime INT,
DiskIO INT,
LastBatch VARCHAR(MAX),
ProgramName VARCHAR(MAX),
SPID_1 INT,
REQUESTID INT)
INSERT INTO @who EXEC sp_who2
--select the blocked and blocking queries (if any) as SQL text
SELECT
(
SELECT TEXT
FROM sys.dm_exec_sql_text(
(SELECT handle
FROM (
SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
FROM sys.sysprocesses WHERE spid = BlockedId
) query)
)
) AS 'Blocked Query (Victim)',
(
SELECT TEXT
FROM sys.dm_exec_sql_text(
(SELECT handle
FROM (
SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
FROM sys.sysprocesses WHERE spid = BlockedById
) query)
)
) AS 'Blocking Query (Culprit)'
FROM @who
WHERE BlockedById != ' .'
Could be that the table is taking a long time to grow.
If you have the database file set to grow by a large amount, and don't have instant file initialization enabled, then the query could certainly time out every once in a while.
Check this mess out: MSDN
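To see whether growth events line up with the timeouts, you could check the current autogrowth settings in the problem database (a minimal sketch; size and growth are in 8 KB pages unless is_percent_growth = 1):
SELECT name, type_desc, size, growth, is_percent_growth
FROM sys.database_files;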
no other statements running on that table at that time.
What about statements running against other tables as part of a transaction? That could leave locks on the problem table.
Also check for log file or data file growth happening at the time; if you're running SQL 2005 it would show in the SQL error logs.
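One way to check for leftover locks while the insert is hung (a minimal sketch; 'YourTable' is a placeholder for the table the INSERT targets):
-- Sessions holding or waiting on object-level locks against the table
SELECT request_session_id, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_database_id = DB_ID()
  AND resource_associated_entity_id = OBJECT_ID('YourTable');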
Our QA had some Excel connections that returned big result sets, those queries got suspended with WaitType of ASYNC_NETWORK_IO for some time. During this time all other queries timed out, so that specific insert had nothing to do with it.
Look at the fragmentation of the table; you could be getting page splits because of that.
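Fragmentation can be checked with the physical stats DMV (a minimal sketch; 'YourTable' is a placeholder for the table the inserts go into):
-- High avg_fragmentation_in_percent on a large page_count suggests heavy page splitting
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('YourTable'), NULL, NULL, 'LIMITED');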