Looping through records in a stored procedure - SQL

I have this query..
Begin
declare @Col int,
@lev int,
@Plan int
select @Col = 411
select @lev = 16
select @Plan = 780
--Insert into baCodeLibrary(Plan_Num,Level_Num,Column_Num,Block_Num,Code_Id,Sort_Order,isactive,Added_By,DateTime_Added,Updated_By,DateTime_Updated)
Select Distinct
@Plan,
@lev,
@Col,
ba_Object_Id - 5539,
ba_Object_Id,
ba_Object_Desc,
ba_Object_Id - 5539,
1,
'xyz',
GETDATE(),
'xyz',
GETDATE()
from baObject
where ba_Object_Id > 5539
and ba_Object_Id < 5554
end
Here I have it only for @Col = 411, but I want to loop over all the columns up to 489.
Could anybody help me out with how to write a loop in this query to cover all the columns from 411 to 489?
Thanks in advance

How about not thinking about this in terms of a loop, and instead think about it in terms of a set?
declare @lev int,
@Plan int;
select @lev = 16,
@Plan = 780;
;WITH n(n) AS
(
SELECT TOP (489 - 411 + 1) Number + 411
FROM master.dbo.spt_values
WHERE type = N'P' ORDER BY Number
)
--Insert dbo.baCodeLibrary(...)
SELECT DISTINCT
@Plan,
@lev,
n.n,
ba.ba_Object_Id - 5539,
ba.ba_Object_Id,
ba.ba_Object_Desc,
ba.ba_Object_Id - 5539,
1,
'xyz',
GETDATE(),
'xyz',
GETDATE()
FROM dbo.baObject AS ba CROSS JOIN n
WHERE ba.ba_Object_Id > 5539
AND ba.ba_Object_Id < 5554;

There's no actual looping mechanism in your query as it exists now. You'll need to implement something like a WHILE loop to get that functionality.
See: http://technet.microsoft.com/en-us/library/ms178642.aspx
It'll look something like
DECLARE @counter int
SET @counter = 1
WHILE @counter < 10
BEGIN
-- loop logic
SET @counter = @counter + 1
END
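Applied to the query in the question, a WHILE loop over the column numbers might look like the sketch below (the INSERT is left commented out, as in the question; note the question's SELECT lists twelve expressions against eleven insert columns, which would need reconciling before uncommenting):

```sql
DECLARE @Col int = 411,
        @lev int = 16,
        @Plan int = 780;

WHILE @Col <= 489
BEGIN
    --Insert into baCodeLibrary(Plan_Num,Level_Num,Column_Num,Block_Num,Code_Id,Sort_Order,isactive,Added_By,DateTime_Added,Updated_By,DateTime_Updated)
    SELECT DISTINCT
        @Plan, @lev, @Col,
        ba_Object_Id - 5539,
        ba_Object_Id,
        ba_Object_Desc,
        ba_Object_Id - 5539,
        1,
        'xyz', GETDATE(), 'xyz', GETDATE()
    FROM baObject
    WHERE ba_Object_Id > 5539
      AND ba_Object_Id < 5554;

    SET @Col = @Col + 1;
END
```

Each iteration produces a separate result set (or a separate insert once the INSERT is restored), which is why the set-based approach in the first answer is usually preferable.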

Looping can be performed three ways in T-SQL: a WHILE loop, a set-based operation, or a cursor.
T-SQL / SQL Server are optimised for set-based operations, and they are easily the most efficient way to loop, but it all depends on what you're trying to achieve.
You may be warned away from cursors, with good reason, but they are perfectly acceptable as long as you understand what you're doing. Here's an example of a very simple, fast cursor:
DECLARE @myColumn VARCHAR(100)
DECLARE cTmp CURSOR FAST_FORWARD FOR
SELECT MyColumn
FROM MyTable (nolock)
OPEN cTmp
FETCH NEXT FROM cTmp INTO @myColumn
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do something with @myColumn here
FETCH NEXT FROM cTmp INTO @myColumn
END
CLOSE cTmp
DEALLOCATE cTmp
GO
Please do not use this code without reading up on cursors first - they should only be used when SET based operations are not suitable. It depends entirely on what you're trying to achieve - cursors can be very resource hungry, and can result in locked records / tables if you're not careful with the hints you apply to them.

Related

Invalid syntax in loop SQL

I get a syntax error at `@` in my code against a PostgreSQL database, but I don't know what could be wrong. Did I implement the SQL loop code in the right way? My SQL code:
DECLARE @Counter INT
SET @Counter=1
WHILE ( @Counter <= 1000)
BEGIN
INSERT INTO punkty (geog) SELECT ST_GeometryN(st_asText(ST_GeneratePoints(geom,1000)), @Counter) FROM panstwo
SET @Counter = @Counter + 1
END
First, your code looks like T-SQL (the Microsoft and Sybase dialect), and it won't work on PostgreSQL by design.
Second, a loop is not required:
INSERT INTO punkty (geog)
SELECT ST_GeometryN(st_asText(ST_GeneratePoints(geom,1000)), s.Counter)
FROM panstwo
CROSS JOIN generate_series(1,1000) s(counter);

How to set total number of rows before an OFFSET occurs in stored procedure

I've created a stored procedure that filters and paginates for a DataTable.
Problem: I need to set an OUTPUT variable for @TotalRecord found before an OFFSET occurs; otherwise it sets @TotalRecord to @RecordPerPage.
I've messed around with CTE's and also simply trying this:
SELECT *, @TotalRecord = COUNT(1)
FROM dbo
But that doesn't work either.
Here is my stored procedure, with most of the stuff pulled out:
ALTER PROCEDURE [dbo].[SearchErrorReports]
@FundNumber varchar(50) = null,
@ProfitSelected bit = 0,
@SortColumnName varchar(30) = null,
@SortDirection varchar(10) = null,
@StartIndex int = 0,
@RecordPerPage int = null,
@TotalRecord INT = 0 OUTPUT --NEED TO SET THIS BEFORE OFFSET!
AS
BEGIN
SET NOCOUNT ON;
SELECT *
FROM
(SELECT *
FROM dbo.View
WHERE (@ProfitSelected = 1 AND Profit = 1)) AS ERP
WHERE
((@FundNumber IS NULL OR @FundNumber = '')
OR (ERP.FundNumber LIKE '%' + @FundNumber + '%'))
ORDER BY
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'asc'
THEN ERP.FundNumber
END ASC,
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'desc'
THEN ERP.FundNumber
END DESC
OFFSET @StartIndex ROWS
FETCH NEXT @RecordPerPage ROWS ONLY
Thank you in advance!
You could try something like this:
create a CTE that gets the data you want to return
include a COUNT(*) OVER() in there to get the total count of rows
return just a subset (based on your OFFSET .. FETCH NEXT) from the CTE
So your code would look something along those lines:
-- CTE definition - call it whatever you like
WITH BaseData AS
(
SELECT
-- select all the relevant columns you need
p.ProductID,
p.ProductName,
-- using COUNT(*) OVER() returns the total count over all rows
TotalCount = COUNT(*) OVER()
FROM
dbo.Products p
)
-- now select from the CTE - using OFFSET/FETCH NEXT, get only those rows you
-- want - but the "TotalCount" column still contains the total count - before
-- the OFFSET/FETCH
SELECT *
FROM BaseData
ORDER BY ProductID
OFFSET 20 ROWS FETCH NEXT 15 ROWS ONLY
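If you need the value in the @TotalRecord OUTPUT parameter itself (rather than as a TotalCount column), one common pattern is a separate COUNT(*) with the same filters, run before the paged query. This is only a sketch against the procedure in the question, and the WHERE clause has to be kept in sync with the main query by hand:

```sql
-- Inside [dbo].[SearchErrorReports], before the paged SELECT:
SELECT @TotalRecord = COUNT(*)
FROM dbo.[View] AS ERP
WHERE (@ProfitSelected = 1 AND ERP.Profit = 1)
  AND ((@FundNumber IS NULL OR @FundNumber = '')
       OR (ERP.FundNumber LIKE '%' + @FundNumber + '%'));

-- ...the existing SELECT ... ORDER BY ... OFFSET/FETCH then runs unchanged.
```

The trade-off is two scans of the filtered set instead of one, which the COUNT(*) OVER() approach avoids.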
As a habit, I prefer non-null entries before possible null. I did not reference those in my response below, and limited a working example to just the two inputs you are most concerned with.
I believe there could be some more clean ways to apply your local variables to filter the query results without having to perform an offset. You could return to a temp table or a permanent usage table that cleans itself up and use IDs that aren't returned as a way to set pages. Smoother, with less fuss.
However, I understand that isn't always feasible, and I become frustrated myself with those attempting to solve your use case for you without attempting to answer the question. Quite often there are multiple ways to tackle any issue. Your job is to decide which one is best in your scenario. Our job is to help you figure out the script.
With that said, here's a potential solution using dynamic SQL.
I'm a huge believer in dynamic SQL, and use it extensively for user based table control and ease of ETL mapping control.
use TestCatalog;
set nocount on;
--Builds a temp table, just for test purposes
drop table if exists ##TestOffset;
create table ##TestOffset
(
Id int identity(1,1)
, RandomNumber decimal (10,7)
);
--Inserts 1000 random numbers between 0 and 100
while (select count(*) from ##TestOffset) < 1000
begin
insert into ##TestOffset
(RandomNumber)
values
(RAND()*100)
end;
set nocount off;
go
create procedure dbo.TestOffsetProc
@StartIndex int = null --I'll reference this like a page number below
, @RecordsPerPage int = null
as
begin
declare @MaxRows int = 30; --your front end will probably manage this, but don't trust it. I personally would store this on a table against each display so it can also be returned dynamically with less manual intrusion to this procedure.
declare @FirstRow int;
--Quick entry to ensure the record count returned doesn't exceed the max allowed.
if @RecordsPerPage is null or @RecordsPerPage > @MaxRows
begin
set @RecordsPerPage = @MaxRows
end;
--Same here, making sure not to return NULL to your dynamic statement. If null is returned from any variable, the entire statement will become null.
if @StartIndex is null
begin
set @StartIndex = 0
end;
set @FirstRow = @StartIndex * @RecordsPerPage
declare @Sql nvarchar(2000) = 'select
tos.*
from ##TestOffset as tos
order by tos.RandomNumber desc
offset ' + convert(nvarchar(20),@FirstRow) + ' rows
fetch next ' + convert(nvarchar(20),@RecordsPerPage) + ' rows only'
exec (@Sql);
end
go
exec dbo.TestOffsetProc;
drop table ##TestOffset;
drop procedure dbo.TestOffsetProc;

Inserting data with SQL

Similar questions have already appeared here, and it seems like I'm doing the same as in other instructions, but it doesn't work. So I have:
Declare @Counter Int
Set @Counter = 1
while @Counter <= 1000
Begin
insert into Kiso_task_table ([Numbers],[Square_root])
values ( @Counter, Sqrt(@Counter));
Set @Counter = @Counter + 1;
CONTINUE;
End
SELECT TOP (1000) [Numbers],[Square_root]
FROM [Kiso_task].[dbo].[Kiso_task_table]
and it should give me numbers from 1 to 1000 and their square roots, respectively - instead it produces "1" every time. Do you know what is wrong?
Your error is in the data type of the column that stores the square root; it must be of type float:
CREATE TABLE #Kiso_task_table
(
[Numbers] INT,
[Square_root] FLOAT,
);
GO
DECLARE @Counter INT
SET @Counter = 1
WHILE @Counter <= 1000
BEGIN
INSERT INTO #Kiso_task_table ([Numbers],[Square_root]) VALUES ( @Counter, Sqrt(@Counter));
SET @Counter = @Counter + 1
CONTINUE
END
SELECT TOP (1000) [Numbers],[Square_root]
FROM #Kiso_task_table
SQRT (Transact-SQL)
Your approach is procedural thinking. Try to start thinking set-based. That means: no CURSOR, no WHILE, no loops, no "do this, then this, and finally this". Let the engine know what you want to get back, and let the engine decide how to do it.
DECLARE @mockupTarget TABLE(Number INT, Square_root DECIMAL(12,8));
WITH TallyOnTheFly(Number) AS
(
SELECT TOP 1000 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values
)
INSERT INTO @mockupTarget(Number,Square_root)
SELECT Number,SQRT(Number)
FROM TallyOnTheFly;
SELECT * FROM @mockupTarget ORDER BY Number;
The tally-cte will create a volatile set of 1000 numbers. This is inserted in one single statement.
Btw
I tested your code against my mockup table and it was working just fine...

SQL Server FOR EACH Loop

I have the following SQL query:
DECLARE @MyVar datetime = '1/1/2010'
SELECT @MyVar
This naturally returns '1/1/2010'.
What I want to do is have a list of dates, say:
1/1/2010
2/1/2010
3/1/2010
4/1/2010
5/1/2010
Then I want to FOR EACH through the dates and run the SQL query.
Something like (pseudocode):
List = 1/1/2010,2/1/2010,3/1/2010,4/1/2010,5/1/2010
For each x in List
do
DECLARE @MyVar datetime = x
SELECT @MyVar
So this would return:
1/1/2010
2/1/2010
3/1/2010
4/1/2010
5/1/2010
I want this to return the data as one resultset, not multiple resultsets, so I may need to use some kind of union at the end of the query, so each iteration of the loop unions onto the next.
Edit: I have a large query that accepts a 'to date' parameter. I need to run it 24 times, each time with a specific to-date that I need to be able to supply (these dates are going to be dynamic). I want to avoid repeating my query 24 times with UNION ALLs joining them, because if I need to come back and add additional columns it would be very time-consuming.
SQL is primarily a set-orientated language - it's generally a bad idea to use a loop in it.
In this case, a similar result could be achieved using a recursive CTE:
with cte as
(select 1 i union all
select i+1 i from cte where i < 5)
select dateadd(d, i-1, '2010-01-01') from cte
Here is an option with a table variable:
DECLARE @MyVar TABLE(Val DATETIME)
DECLARE @I INT, @StartDate DATETIME
SET @I = 1
SET @StartDate = '20100101'
WHILE @I <= 5
BEGIN
INSERT INTO @MyVar(Val)
VALUES(@StartDate)
SET @StartDate = DATEADD(DAY,1,@StartDate)
SET @I = @I + 1
END
SELECT *
FROM @MyVar
You can do the same with a temp table:
CREATE TABLE #MyVar(Val DATETIME)
DECLARE @I INT, @StartDate DATETIME
SET @I = 1
SET @StartDate = '20100101'
WHILE @I <= 5
BEGIN
INSERT INTO #MyVar(Val)
VALUES(@StartDate)
SET @StartDate = DATEADD(DAY,1,@StartDate)
SET @I = @I + 1
END
SELECT *
FROM #MyVar
You should tell us what your main goal is; as was said by @JohnFx, this could probably be done another (more efficient) way.
You could use a table variable, like this:
declare @num int
set @num = 1
declare @results table ( val int )
while (@num < 6)
begin
insert into @results ( val ) values ( @num )
set @num = @num + 1
end
select val from @results
This kind of depends on what you want to do with the results. If you're just after the numbers, a set-based option would be a numbers table - which comes in handy for all sorts of things.
For MSSQL 2005+, you can use a recursive CTE to generate a numbers table inline:
;WITH Numbers (N) AS (
SELECT 1 UNION ALL
SELECT 1 + N FROM Numbers WHERE N < 500
)
SELECT N FROM Numbers
OPTION (MAXRECURSION 500)
declare @counter as int
set @counter = 0
declare @date as varchar(50)
set @date = cast(1+@counter as varchar)+'/01/2013'
while(@counter < 12)
begin
select cast(1+@counter as varchar)+'/01/2013' as date
set @counter = @counter + 1
end
Of course, this is an old question, but I have a simple solution where there is no need for looping, CTEs, table variables, etc.
DECLARE @MyVar datetime = '1/1/2010'
SELECT @MyVar
SELECT DATEADD (DD,NUMBER,@MyVar)
FROM master.dbo.spt_values
WHERE TYPE='P' AND NUMBER BETWEEN 0 AND 4
ORDER BY NUMBER
Note: spt_values is an undocumented Microsoft table. It has numbers for every type. It is not advisable to rely on it, since it is undocumented and can be removed in any new version of SQL Server without prior notice. But we can use it as a quick workaround in some scenarios, like the one above.
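If you want to avoid the undocumented table entirely, the same result can be produced with a small recursive CTE (or, on SQL Server 2022 and later, the built-in GENERATE_SERIES function). A sketch of the CTE version, producing the same five dates:

```sql
DECLARE @MyVar datetime = '1/1/2010';

WITH Numbers (N) AS
(
    SELECT 0
    UNION ALL
    SELECT N + 1 FROM Numbers WHERE N < 4
)
SELECT DATEADD(DD, N, @MyVar)
FROM Numbers
ORDER BY N;
```

For larger ranges the recursion depth must be raised with OPTION (MAXRECURSION n), since the default limit is 100.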
CREATE PROCEDURE [rat].[GetYear]
AS
BEGIN
-- variable for storing the start year
Declare @StartYear as int
-- variable for the end year
Declare @EndYear as int
-- setting the value of the start year
select @StartYear = Value from rat.Configuration where Name = 'REPORT_START_YEAR';
-- setting the end year
select @EndYear = Value from rat.Configuration where Name = 'REPORT_END_YEAR';
-- creating the year list as a CTE
with [Years] as
(
--selecting the start year
select @StartYear [Year]
--doing a union
union all
-- doing the "loop" in the Years CTE
select Year+1 Year from [Years] where Year < @EndYear
)
--selecting from the Years CTE
select [Year] from [Years];
END

Fast way to insert from relational table to object-oriented table

I have a relational table with lots of columns. (import_table)
I'm trying to insert all of this data into an object-oriented database.
The object oriented database has tables:
#table (tableId, name)
#row (rowId, table_fk)
#column(colId, table_fk, col_name)
#value(valueId, col_fk, row_fk)
So far I have created a procedure that will read the import_table information_schema and insert the table and the columns correctly into the object-orientated structure.
I then copy the import_data into a temp table with an extra identity column just to get row ids, then iterate through all rows, with an inner loop that iterates through each column and does an insert per column.
Like this:
SELECT ROWID=IDENTITY(INT, 1, 1), * INTO #TEST
FROM import_table
DECLARE @COUNTER INT = 1
WHILE @COUNTER <= (SELECT COUNT(*) FROM #TEST)
BEGIN
INSERT INTO #ROW (ROWID, TABLE_FK) VALUES(@COUNTER, 1)
DECLARE @COLUMNCOUNTER INT = 1
WHILE @COLUMNCOUNTER <= (SELECT COUNT(*) FROM #COLUMN WHERE TABLE_FK = 1)
BEGIN
DECLARE @COLNAME NVARCHAR(254) = (select col_name from #column where table_fk = 1 and colId = @COLUMNCOUNTER)
DECLARE @INSERTSQL NVARCHAR(1000) = 'insert into #value (col_fk, row_fk, value) select '+cast(@COLUMNCOUNTER as nvarchar(20))+', '+cast(@COUNTER as nvarchar(20))+', ' + @COLNAME+' from #test where rowId = '+cast(@COUNTER as nvarchar(20))
exec (@INSERTSQL)
set @COLUMNCOUNTER = @COLUMNCOUNTER + 1
end
set @COUNTER = @COUNTER + 1
end
This works, but it is extremely slow.
Any suggestions on how to speed things up?
Redesign the database.
What you have there is a "property bag" type of table design (entity-attribute-value). These are known trade-off setups, and guess what: speed is the thing they are bad at. Ouch.
You could likely do things faster in C# outside the database and then stream the inserts back in.
One way to speed up your code is to wrap your inserts in transactions (maybe one transaction per iteration of the outer loop?). Having each one in a separate transaction, as the code is now, will be very slow if there are lots of inserts.