Run For loop in parallel in SQL Server

I have a SQL Server procedure that takes one id, processes it, and makes some inserts, and this all happens sequentially. How can I do it in parallel?
WHILE (@InnerCount <= (SELECT COUNT(Id) FROM #MB))
BEGIN
SELECT @StartDate = StartDate, @EndDate = EndDate, @WeekMonthName = MonthName
FROM #MB WHERE Id = @InnerCount
DELETE FROM facttemp
WHERE ScoreCardId = @ScoreCardId AND StartDate = @StartDate AND EndDate = @EndDate
TRUNCATE TABLE #temp
INSERT INTO #temp (EmployeeId, EmployeeInstanceId, UserId, ScoreCardId, ScorecardTarget, OverAllScore, Rank, MetricId, MetricName, MetricScore, MetricTarget, MetricWeightPercent, MetricBandNumber, IsQualityMetric)
EXEC [dbo].[usp_ScoreCardPreSummaryWeeklyMonthlyData] @ScoreCardId, @StartDate, @EndDate, @AccountId, @AccountInstanceId
INSERT INTO facttemp (
EmployeeId, EmployeeInstanceId, UserId, ScoreCardId, ScorecardTarget, OverAllScore, Rank,
MetricId, MetricName, MetricScore, MetricTarget, MetricWeightPercent, MetricBandNumber, IsQualityMetric,
StartDate, EndDate, Month, Year, AccountId, AccountInstanceId)
SELECT EmployeeId, EmployeeInstanceId, UserId,
ScoreCardId, ScorecardTarget, OverAllScore, Rank,
MetricId, MetricName, MetricScore, MetricTarget, MetricWeightPercent, MetricBandNumber, IsQualityMetric,
@StartDate, @EndDate, @WeekMonthName, YEAR(@StartDate), @AccountId, @AccountInstanceId
FROM #temp
SET @InnerCount = @InnerCount + 1
END -- end of monthly loop
END

Well, what I would do in this case is create a temporary SQL Agent job (msdb.dbo.sp_add_job) with this T-SQL script as step 1 (msdb.dbo.sp_add_jobstep) and have the job delete itself in step 2 (msdb.dbo.sp_delete_job).
That way you can start the temporary job (msdb.dbo.sp_start_job) and continue running your script while the job runs in the background. You do not have to worry about the temporary jobs piling up, as they all delete themselves in step 2.
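A minimal sketch of that approach, assuming SQL Server Agent is available; the job name, the step command (dbo.usp_ProcessOneId) and the database name (MyDatabase) are placeholders. Setting @delete_level = 3 tells the Agent to delete the job automatically when it completes, which can replace the manual sp_delete_job step:
DECLARE @jobName sysname = 'tmp_ProcessId_' + CONVERT(varchar(36), NEWID());
EXEC msdb.dbo.sp_add_job
    @job_name = @jobName,
    @delete_level = 3;  -- auto-delete the job once it finishes
EXEC msdb.dbo.sp_add_jobstep
    @job_name = @jobName,
    @step_name = 'run batch',
    @subsystem = 'TSQL',
    @command = 'EXEC dbo.usp_ProcessOneId @Id = 42;',  -- placeholder: the per-id work
    @database_name = 'MyDatabase';
EXEC msdb.dbo.sp_add_jobserver @job_name = @jobName;   -- target the local server
EXEC msdb.dbo.sp_start_job @job_name = @jobName;       -- returns immediately
You could start one such job per id so the batches run concurrently, but note that each job runs on its own connection, so temp tables like #MB and #temp are not shared with the caller.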

Related

How to keep temporary table after running stored procedure?

I know the temporary table will be deleted after the connection is lost. But within that connection, I want to do something like
EXEC test;
SELECT * FROM #Final;
#Final is the temporary table created in the stored procedure. The stored procedure takes 30 seconds, and I want to inspect #Final without running the stored procedure again.
If I run the statements from the stored procedure directly, #Final can be reused within the connection. But how can I use it after EXEC test?
So, short of creating a real table, is it possible to SELECT * FROM #Final after EXEC test? If not, I'll use a real table instead. Thanks!
Then you don't want a temporary table. Use either a global temporary table (##final) or a real table.
Then delete the results after you run the procedure.
I should note that the stored procedure can return a result set, which you can capture in a table of your own using INSERT ... EXEC.
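For example, a minimal sketch of INSERT ... EXEC, assuming dbo.test ends with a SELECT that returns a single int column (the column list must match whatever the procedure actually returns):
CREATE TABLE #Results (Id int);
INSERT INTO #Results (Id)
EXEC dbo.test;
SELECT * FROM #Results;  -- the temp table lives on in this session after the EXEC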
Use a global temporary table:
create proc demo as
begin
select * into ##temptable from original_table
end
exec demo;
select * from ##temptable

CTE & Temp Tables Performance Issue

The query that I've been working on for a while was filled with 7 temp tables until I had to replace them with CTEs (7 of them), because OPENQUERY gives the following error when temp tables are used:
Metadata discovery only supports temp tables when analyzing a single-statement batch.
When I run the Query with Temp Tables, the run duration is:
7:50
When I run the Query with CTE's, the run duration is:
15:00
Almost double the time! Is there any other alternative to OPENQUERY that might make it run faster while perhaps keeping my temp tables?
Current execution Query:
SET @XSql = 'SELECT * FROM OPENQUERY([server], ''' + REPLACE(@QSql, '''', '''''') + ''')'
EXEC(#XSql)
I used this for reference: Stored Procedure and populating a Temp table from a linked Stored Procedure with parameters
And I need an optimal solution.
Open to suggestions!
Can you use EXEC ... AT SERVER? This worked fine for me:
EXEC ('CREATE TABLE #TestTable1 (ID int); CREATE TABLE #TestTable2 (ID int); SELECT * FROM #TestTable1, #TestTable2;') AT LinkedServer;
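If you still want the result on the local side, the output of EXEC ... AT can be captured with INSERT ... EXEC into a local temp table, so the temp-table-based outer query can be kept. This assumes the linked server has the RPC Out option enabled, which EXEC ... AT requires anyway; the column list here is illustrative:
CREATE TABLE #Results (ID int);
INSERT INTO #Results (ID)
EXEC ('CREATE TABLE #TestTable1 (ID int);
       INSERT INTO #TestTable1 (ID) VALUES (1);
       SELECT ID FROM #TestTable1;') AT LinkedServer;
SELECT * FROM #Results;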

Start SQL Server Agent job when case when statement returns true

I want to create a procedure that constantly checks and compares row counts between a source and a target table. If the source table has a higher row count, I want to execute a SQL Server Agent job, and my procedure should wait till that job finishes.
For Example:
create proc 'XYZ'
case when a.count(*) > b.count(*) then sp_start_job 'SSIS_package_ABC'
wait for 'package execution completion'
I would really appreciate it if someone could point me in the right direction as I am new to SQL Server Agent.
Use IF statements instead of CASE:
DECLARE @SRC_TABLE_CNT INT,
@DEST_TABLE_CNT INT
SELECT @SRC_TABLE_CNT = COUNT(*) FROM SOURCE_TABLE
SELECT @DEST_TABLE_CNT = COUNT(*) FROM DEST_TABLE
IF @SRC_TABLE_CNT > @DEST_TABLE_CNT
BEGIN
EXEC msdb.dbo.sp_start_job 'SSIS_package_ABC'
END
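Note that sp_start_job is asynchronous: it returns as soon as the job is started. To make the procedure wait until the job finishes, one common pattern is to poll msdb.dbo.sysjobactivity, where a row with a start time but no stop time means the job is still running. A rough sketch (production code should also filter to the latest Agent session via msdb.dbo.syssessions, since sysjobactivity keeps rows from earlier sessions):
EXEC msdb.dbo.sp_start_job @job_name = 'SSIS_package_ABC';
WHILE EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity ja
    JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
    WHERE j.name = 'SSIS_package_ABC'
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL
)
BEGIN
    WAITFOR DELAY '00:00:10';  -- poll every 10 seconds
END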

Truncate Statement Taking Too much time

I have a table which has more than 1 million records, and I have created a stored procedure to insert data into that table. Before inserting the data I need to truncate the table, but the truncate is taking too long.
I have read in a few places that a truncate can take a long time if the table is being used by someone else or locks are held on it, but here I am the only user and I have applied no locks.
Also, no other transactions were open when I tried to truncate the table.
As my database is on SQL Azure, I am not supposed to drop the indexes, since it does not allow me to insert the data without an index.
Drop all the indexes from the table and then truncate it; if you want to insert the data, insert it and recreate the indexes after the insert is done.
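A minimal sketch of that sequence; the index, table, and column names are made up for illustration:
DROP INDEX IX_MyTable_Col1 ON dbo.MyTable;
TRUNCATE TABLE dbo.MyTable;
INSERT INTO dbo.MyTable (Col1, Col2)
SELECT Col1, Col2 FROM dbo.StagingTable;
CREATE INDEX IX_MyTable_Col1 ON dbo.MyTable (Col1);
(On SQL Azure you cannot drop the clustered index, as the asker notes, but nonclustered indexes can still be dropped and recreated around the load.)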
When deleting from Azure you can get into all sorts of trouble, but a slow truncate is almost always an issue of locking. If you can't fix that, you can always use this trick when deleting from Azure:
declare @iDeleteCounter int = 1
while @iDeleteCounter > 0
begin
begin transaction deletes;
with deleteTable as
(
select top 100000 * from mytable where mywhere
)
delete from deleteTable
commit transaction deletes
select @iDeleteCounter = count(1) from mytable where mywhere
print 'deleted 100000 from table'
end

How to force a running t-sql query (half done) to commit?

I have a database on SQL Server 2008 R2.
On that database, a delete query on 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I don't want the rows that the running query has already deleted to come back.
But the problem is that the query is still running and will not complete before the server reboots.
Note: I have not set any isolation level or begin/end transaction for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, the database will go into recovery mode and keep recovering for the next 2 days; then I'll need to re-run the delete and it will cost me another 4 days.
I would really appreciate any suggestion, help, or guidance on this. I am a novice SQL Server user. Thanks in advance.
There is no way to stop SQL Server from bringing the database back into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So whether you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time you delete so many rows at once, don't do it all at once. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in a loop) to minimize transaction log use and locking.
set rowcount 5000
delete MyTable          -- deletes at most 5000 rows per statement
while @@rowcount = 5000
delete MyTable
set rowcount 0
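The same loop can be written with DELETE TOP, which avoids SET ROWCOUNT (deprecated for affecting DML statements). A sketch with a made-up table and filter:
DELETE TOP (5000) FROM dbo.BigTable WHERE ArchivedDate < '2010-01-01';
WHILE @@ROWCOUNT = 5000
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable WHERE ArchivedDate < '2010-01-01';
END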
If you are deleting that many rows you may have a better time with TRUNCATE, which removes all rows from a table very efficiently. However, I'm assuming you would like to keep some of the records. The stored procedure below copies the data you want to keep into a temp table, truncates the original, and then re-inserts the saved records. This can clean a huge table very quickly.
Note that TRUNCATE doesn't play well with foreign key constraints, so you may need to drop those and recreate them after the clean-up.
CREATE PROCEDURE [dbo].[deleteTableFast] (
@TableName VARCHAR(100),
@WhereClause VARCHAR(1000))
AS
BEGIN
-- input:
-- table name: the table to clean out
-- where clause: the WHERE clause selecting the records to KEEP
declare @tempTableName varchar(100);
set @tempTableName = @TableName + '_temp_to_truncate';
-- error checking
if exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = (@tempTableName)) begin
print 'ERROR: temp table already exists ... exiting'
return
end
if not exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = (@TableName)) begin
print 'ERROR: table does not exist ... exiting'
return
end
-- save wanted records via a temp table to be able to truncate
exec ('select * into ' + @tempTableName + ' from ' + @TableName + ' WHERE ' + @WhereClause);
exec ('truncate table ' + @TableName);
exec ('insert into ' + @TableName + ' select * from ' + @tempTableName);
exec ('drop table ' + @tempTableName);
end
GO
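A hypothetical call, keeping only rows from 2015 onward (note the procedure looks tables up by bare name in Information_Schema, so pass the name without a schema prefix):
EXEC dbo.deleteTableFast
     @TableName   = 'SalesHistory',
     @WhereClause = 'OrderDate >= ''2015-01-01''';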
You must understand the D (Durability) in ACID before you can understand why the database goes into recovery mode.
Generally speaking, you should avoid long-running SQL where possible. A long-running statement means more lock time on resources, a larger transaction log, and a huge rollback if it fails.
Consider dividing your task by some id or time range. For example, if you want to insert a large volume of data from TableSrc into TableTarget, you can write a query like:
DECLARE @BatchCount INT = 1000;
DECLARE @Id INT = 0;
DECLARE @Max INT;
SELECT @Max = MAX(PrimaryKey) FROM TableSrc; -- assuming PrimaryKey is an increasing int
WHILE @Id <= @Max
BEGIN
INSERT INTO TableTarget
SELECT * FROM TableSrc
WHERE PrimaryKey >= @Id AND PrimaryKey < @Id + @BatchCount;
SET @Id = @Id + @BatchCount;
END
It's uglier, it's more code, and it's more error prone, but it's the only way I know to deal with huge data volumes.