SQL Server cursor

I want to copy data from one table (rawdata, where all columns are VARCHAR) to another table (formatted, with the corresponding column types).
To copy data from the rawdata table into the formatted table, I'm using a cursor so I can identify which row fails. I need to log that particular row in an error log table, skip it, and continue copying the remaining rows.
The copy takes a long time. Is there any other way to achieve this?
This is my query:
DECLARE @EntityId Varchar(16),
@PerfId Varchar(16),
@BaseId Varchar(16),
@UpdateStatus Varchar(16)
DECLARE CursorSample CURSOR FOR
SELECT EntityId, PerfId, BaseId, UpdateStatus
FROM RawdataTable
--Returns 204,000 rows
OPEN CursorSample
FETCH NEXT FROM CursorSample INTO @EntityId, @PerfId, @BaseId, @UpdateStatus
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
--Try inserting the row into the formatted table
Insert into FormattedTable
(EntityId, PerfId, BaseId, UpdateStatus)
Values
(Convert(int, @EntityId),
Convert(int, @PerfId),
Convert(int, @BaseId),
Convert(int, @UpdateStatus))
END TRY
BEGIN CATCH
--Capture the failing EntityId in the error log table
Insert into ERROR_LOG
(TableError_Message, Error_Procedure, Error_Log_Time)
Values
(Error_Message() + @EntityId, 'xxx', GETDATE())
END CATCH
FETCH NEXT FROM CursorSample INTO @EntityId, @PerfId, @BaseId, @UpdateStatus
END
CLOSE CursorSample
DEALLOCATE CursorSample --cleanup CursorSample

You should just be able to use an INSERT INTO ... SELECT statement to put the records directly into the formatted table. A set-based INSERT INTO will perform much better than using a cursor.
INSERT INTO FormattedTable
SELECT
CONVERT(int, EntityId),
CONVERT(int, PerfId),
CONVERT(int, BaseId),
CONVERT(int, UpdateStatus)
FROM RawdataTable
WHERE
IsNumeric(EntityId) = 1
AND IsNumeric(PerfId) = 1
AND IsNumeric(BaseId) = 1
AND IsNumeric(UpdateStatus) = 1
Note that IsNumeric can sometimes return 1 for values that will then fail on CONVERT. For example, IsNumeric('$e0') will return 1, so you may need to create a more robust user defined function for determining if a string is a number, depending on your data.
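If you are on SQL Server 2012 or later, TRY_CONVERT sidesteps the IsNumeric edge cases entirely, because it returns NULL instead of raising an error when a value cannot be converted. A minimal sketch, assuming the same table and column names as above:
INSERT INTO FormattedTable (EntityId, PerfId, BaseId, UpdateStatus)
SELECT
TRY_CONVERT(int, EntityId),
TRY_CONVERT(int, PerfId),
TRY_CONVERT(int, BaseId),
TRY_CONVERT(int, UpdateStatus)
FROM RawdataTable
WHERE
TRY_CONVERT(int, EntityId) IS NOT NULL
AND TRY_CONVERT(int, PerfId) IS NOT NULL
AND TRY_CONVERT(int, BaseId) IS NOT NULL
AND TRY_CONVERT(int, UpdateStatus) IS NOT NULL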
Also, if you need a log of all records that could not be moved into the formatted table, just modify the WHERE clause:
INSERT INTO ErrorLog
SELECT
EntityId,
PerfId,
BaseId,
UpdateStatus
FROM RawdataTable
WHERE
NOT (IsNumeric(EntityId) = 1
AND IsNumeric(PerfId) = 1
AND IsNumeric(BaseId) = 1
AND IsNumeric(UpdateStatus) = 1)
EDIT
Rather than using IsNumeric directly, it may be better to create a custom UDF that will tell you if a string can be converted to an int. This function worked for me (albeit with limited testing):
CREATE FUNCTION IsInt(@value VARCHAR(50))
RETURNS bit
AS
BEGIN
DECLARE @number AS INT
DECLARE @numeric AS NUMERIC(18,2)
SET @number = 0
IF IsNumeric(@value) = 1
BEGIN
SET @numeric = CONVERT(NUMERIC(18,2), @value)
IF @numeric BETWEEN -2147483648 AND 2147483647
SET @number = 1 -- flag that the value fits in an int (returning the converted value itself would wrongly report '0' as not an int)
END
RETURN @number
END
GO
The updated SQL for the insert into the formatted table would then look like this:
INSERT INTO FormattedTable
SELECT
CONVERT(int, CONVERT(NUMERIC(18,2), EntityId)),
CONVERT(int, CONVERT(NUMERIC(18,2), PerfId)),
CONVERT(int, CONVERT(NUMERIC(18,2), BaseId)),
CONVERT(int, CONVERT(NUMERIC(18,2), UpdateStatus))
FROM RawdataTable
WHERE
dbo.IsInt(EntityId) = 1
AND dbo.IsInt(PerfId) = 1
AND dbo.IsInt(BaseId) = 1
AND dbo.IsInt(UpdateStatus) = 1
There may be a little weirdness around handling NULLs (my function will return 0 if NULL is passed in, even though an INT can certainly be null), but that can be adjusted depending on what is supposed to happen with NULL values in the RawdataTable.
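For example, if a NULL in RawdataTable should be treated as convertible (so the row is kept and a NULL int is inserted), one possible adjustment is an early exit at the top of the function; this is just a sketch of one policy, not the only option:
-- Inside dbo.IsInt, before the IsNumeric check (assumes NULLs should pass through)
IF @value IS NULL
RETURN 1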

You can put a WHERE clause in your cursor definition so that only valid records are selected in the first place. You might need to create a function to determine validity, but it should be faster than looping over them.
Actually, you might want to create a temp table of the invalid records, so that you can log the errors, then define the cursor only on the rows that are not in the temp table.
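A sketch of that idea, assuming the table and column names from the question and reusing the dbo.IsInt helper defined in the answer above; the logging message format is only an example:
-- 1. Capture the invalid rows once so they can be logged
SELECT EntityId, PerfId, BaseId, UpdateStatus
INTO #InvalidRows
FROM RawdataTable
WHERE dbo.IsInt(EntityId) = 0
OR dbo.IsInt(PerfId) = 0
OR dbo.IsInt(BaseId) = 0
OR dbo.IsInt(UpdateStatus) = 0

-- 2. Log them
INSERT INTO ERROR_LOG (TableError_Message, Error_Procedure, Error_Log_Time)
SELECT 'Could not convert row with EntityId ' + ISNULL(EntityId, 'NULL'), 'xxx', GETDATE()
FROM #InvalidRows

-- 3. Copy the remaining rows set-based (or, if you prefer, open the cursor over this SELECT instead)
INSERT INTO FormattedTable (EntityId, PerfId, BaseId, UpdateStatus)
SELECT CONVERT(int, EntityId), CONVERT(int, PerfId), CONVERT(int, BaseId), CONVERT(int, UpdateStatus)
FROM RawdataTable
WHERE dbo.IsInt(EntityId) = 1
AND dbo.IsInt(PerfId) = 1
AND dbo.IsInt(BaseId) = 1
AND dbo.IsInt(UpdateStatus) = 1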

INSERT INTO will work much better than a cursor.
A cursor processes rows one at a time and keeps its state in memory, which prevents SQL Server from optimizing the operation as a single set-based statement. We should avoid cursors, although (of course) there are situations where using one cannot be avoided.

Customized Primary Key on SQL Server 2008 R2

I have spent several days trying to solve this problem, but my lack of knowledge is stopping me; I don't know if what I am trying to accomplish is even possible.
I need to have a table like this:
The first field should be a custom primary key ID (auto incremented):
YYYYMMDD-99
Where YYYYMMDD is the current date and "99" is a counter that should be incremented automatically from 01 to 99 for every new row added, and needs to restart automatically at 01 the next day.
The second field is a regular NVARCHAR(40) text field called: Name
For example, I add three rows, entering just the "Name" of the person, and the ID is added automatically:
ID Name
---------------------------
20160629-01 John
20160629-02 Katie
20160629-03 Mark
Then, the next day I add two new rows:
ID Name
-------------------------
20160630-01 Bob
20160630-02 Dave
The last two digits should be restarted, after the day changes.
And what is all this about?
Answer: a customer requirement.
If it's possible to do this in a stored procedure, that will work for me too.
Thanks in advance!!
This is pretty easy to achieve, but a bit more complicated to do in a way that is safe with multiple clients.
What you need is a new table (for example named IndexHelper) that stores the parts of the ID in two columns: one holds the current date formatted the way you want it in your ID, and the other holds the current index as an integer. Example:
DateString CurrentIndex
-------------------------------
20160629 13
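A minimal definition for such a table might look like this (column names taken from the example above; the primary key on DateString is an assumption, but it also protects against duplicate date rows):
CREATE TABLE dbo.IndexHelper (
DateString NVARCHAR(10) NOT NULL PRIMARY KEY,
CurrentIndex INT NOT NULL DEFAULT 0
)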
Now you need some code that gets the next index value atomically, i.e. in a way that also works when more than one client tries to insert at the same time, without handing out the same index more than once.
T-SQL comes to the rescue with its UPDATE ... OUTPUT clause, which allows you to update a table and output the new values at the same time, as a single atomic operation that cannot be interrupted.
In your case, this statement could look like this:
DECLARE @curDay NVARCHAR(10)
DECLARE @curIndex INT
DECLARE @tempTable TABLE (theDay NVARCHAR(10), theIndex INT)
UPDATE IndexHelper
SET CurrentIndex = CurrentIndex + 1
OUTPUT INSERTED.DateString, INSERTED.CurrentIndex INTO @tempTable
WHERE DateString = <code that converts CURRENT_TIMESTAMP into the string format you want>
SELECT @curDay = theDay, @curIndex = theIndex FROM @tempTable
Unfortunately you have to go through the table variable, because the OUTPUT ... INTO clause demands a table as its target.
This increments the CurrentIndex field in IndexHelper atomically for the current date. You can combine both into a value like this:
DECLARE @newIndexValue NVARCHAR(15)
SET @newIndexValue = @curDay + '-' + RIGHT('00' + CONVERT(NVARCHAR, @curIndex), 2)
Now the question is: How do you handle the "go back to 01 for the next day" requirement? Also easy: add entries into IndexHelper two days in advance, with the respective date and index 0. You can do this safely every time your code is called if you check that an entry for a day is actually missing. So for today your table might look like this:
DateString CurrentIndex
-------------------------------
20160629 13
20160630 0
20160701 0
The first call tomorrow would make this look like:
DateString CurrentIndex
-------------------------------
20160629 13
20160630 1
20160701 0
20160702 0
Wrap this up into a stored procedure that does the entire INSERT process into your original table, what you get is:
Add missing entries for the next two days to IndexHelper table.
Get the next ID atomically as described above
Combine date string and ID from the UPDATE command into a single string
Use this in the INSERT command for your actual data
This results in the following stored procedure you can use to insert your data:
-- This is our "work date"
DECLARE #now DATETIME = CURRENT_DATETIME
-- These are the date strings that we need
DECLARE #today NVARCHAR(10) = CONVERT(NVARCHAR, #now, 112)
DECLARE #tomorrow NVARCHAR(10) = CONVERT(NVARCHAR, DATEADD(dd, 1, #now), 112)
DECLARE #datomorrow NVARCHAR(10) = CONVERT(NVARCHAR, DATEADD(dd, 2, #now), 112)
-- We will need these later
DECLARE #curDay NVARCHAR(10)
DELCARE #curIndex INT
DECLARE #tempTable TABLE (theDay NVARCHAR(10), theIndex INT)
DECLARE #newIndexValue NVARCHAR(15)
-- Add entries for next two days into table
-- NOTE: THIS IS NOT ATOMIC! SUPPOSED YOU HAVE A PK ON DATESTRING, THIS
-- MAY EVEN FAIL! THAT'S WHY IS USE BEGIN TRY
BEGIN TRY
IF NOT EXISTS (SELECT 1 FROM IndexHelper WHERE DateString = #tomorrow)
INSERT INTO IndexHelper (#tomorrow, 0)
END TRY
BEGIN CATCH
PRINT 'hmpf'
END CATCH
BEGIN TRY
IF NOT EXISTS (SELECT 1 FROM IndexHelper WHERE DateString = #datomorrow)
INSERT INTO IndexHelper (#datomorrow, 0)
END TRY
BEGIN CATCH
PRINT 'hmpf again'
END CATCH
-- Now perform the atomic update
UPDATE IndexHelper
SET
CurrentIndex = CurrentIndex + 1
OUTPUT
INSERTED.DateString,
INSERTED.CurrentIndex
INTO #temptable
WHERE CurrentDate = #today
-- Get the values after the update
SELECT #curDay = theDay, #curIndex = theIndex FROM #tempTable
-- Combine these into the new index value
SET #newIndexValue = #curDay + '-' + RIGHT('00' + CONVERT(NVARCHAR, #curIndex), 2)
-- PERFORM THE INSERT HERE!!
...
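The final insert itself is left open in the original ("PERFORM THE INSERT HERE"); with the example table from the question it might look roughly like this, assuming the procedure takes a @name parameter and the target table is called dbo.Customer (both of those names are mine, not from the question):
-- Hypothetical target table: dbo.Customer(ID NVARCHAR(15), Name NVARCHAR(40))
INSERT INTO dbo.Customer (ID, Name)
VALUES (@newIndexValue, @name)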
One way to achieve a customised auto-increment is to use an INSTEAD OF trigger in SQL Server.
https://msdn.microsoft.com/en-IN/library/ms189799.aspx
I have tested this using the code below; it might be helpful.
It is written on the assumption that at most 99 records will be inserted on a given day.
You will have to modify it to handle more than 99 records.
CREATE TABLE dbo.CustomerTb(
ID VARCHAR(50),
Name VARCHAR(50)
)
GO
CREATE TRIGGER dbo.InsertCustomerTrigger ON dbo.CustomerTb INSTEAD OF INSERT
AS
BEGIN
DECLARE @MaxID SMALLINT = 0;
SELECT @MaxID = ISNULL(MAX(RIGHT(ID,2)), 0)
FROM dbo.CustomerTb
WHERE LEFT(ID,8) = FORMAT(GETDATE(),'yyyyMMdd');
INSERT INTO dbo.CustomerTb(
ID,
Name
)
SELECT FORMAT(GETDATE(),'yyyyMMdd')+'-'+RIGHT('00'+CONVERT(VARCHAR(5),ROW_NUMBER() OVER(ORDER BY Name)+@MaxID),2),
Name
FROM inserted;
END
GO
TEST CASE 1
INSERT INTO dbo.CustomerTb(NAME) VALUES('A'),('B');
SELECT * FROM dbo.CustomerTb;
TEST CASE 2
INSERT INTO dbo.CustomerTb(NAME) VALUES('P'),('Q');
SELECT * FROM dbo.CustomerTb;

Occasional arithmetic overflow error converting expression to data type int

I'm running an update script to obfuscate data and am occasionally getting the arithmetic overflow error message in the title. The table being updated has 260k records, and yet the update script needs to be run several times before the error appears. Although it's rare, I can't rely on the code until it's fixed, and it's a pain to debug.
Looking at other similar questions, this is often resolved by changing a data type, e.g. from INT to BIGINT, either in the table or in a calculation. However, I can't see where that could be required. I've reduced the script to the code below, as I've managed to pinpoint the problem to the update of one column.
A function is being called by the update, and I've included it below. I suspect that, due to the randomness of the error, the use of NEWID() could be causing it, but I haven't been able to re-create the error when just running that part of the function multiple times. NEWID() can't be used inside a function, so it's being called from a view, also included below.
Update script:
UPDATE dbo.Addresses
SET HouseNumber = CASE WHEN LEN(HouseNumber) > 0
THEN dbo.fn_GenerateRandomString (LEN(HouseNumber), 1, 1, 1)
ELSE HouseNumber
END
NEW_ID view and random string function
CREATE VIEW dbo.vw_GetNewID
AS
SELECT NEWID() AS New_ID
CREATE FUNCTION dbo.fn_GenerateRandomString (
@stringLength int,
@upperCaseBit bit,
@lowerCaseBit bit,
@numberBit bit
)
RETURNS nvarchar(100)
AS
BEGIN
-- Sanitise string length values.
IF ISNULL(@stringLength, -1) < 0
SET @stringLength = 0
-- Generate a random string from the specified character sets.
DECLARE @string nvarchar(100) = ''
SELECT
@string += c2
FROM
(
SELECT TOP (@stringLength) c2 FROM (
SELECT c1 FROM
(
VALUES ('A'),('B'),('C')
) AS T1(c1)
WHERE @upperCaseBit = 1
UNION ALL
SELECT c1 FROM
(
VALUES ('a'),('b'),('c')
) AS T1(c1)
WHERE @lowerCaseBit = 1
UNION ALL
SELECT c1 FROM
(
VALUES ('0'),('1'),('2'),('3'),('4'),('5'),('6'),('7'),('8'),('9')
) AS T1(c1)
WHERE @numberBit = 1
)
AS T2(c2)
ORDER BY (SELECT ABS(CHECKSUM(New_ID)) from vw_GetNewID)
) AS T2
RETURN @string
END
Addresses table (for testing):
CREATE TABLE dbo.Addresses(HouseNumber nchar(32) NULL)
INSERT Addresses(HouseNumber)
VALUES ('DSjkmf jkghjsh35hjk h2jkhj3h jhf'),
('SDjfksj3548 ksjk'),
(NULL),
(''),
('2a'),
('1234567890'),
('An2b')
Note: only 7k of the rows in the addresses table have a value entered i.e. LEN(HouseNumber) > 0.
An arithmetic overflow in what is otherwise string-based code is confounding. But there is one thing that could be causing the arithmetic overflow. That is your ORDER BY clause:
ORDER BY (SELECT ABS(CHECKSUM(New_ID)) from vw_GetNewID)
CHECKSUM() returns an integer, whose range is -2,147,483,648 to 2,147,483,647. Note the absolute value of the smallest number is 2,147,483,648, and that is just outside the range. You can verify that SELECT ABS(CAST('-2147483648' as int)) generates the arithmetic overflow error.
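You can see the boundary case on its own, without any of the surrounding query:
-- ABS() of the smallest int cannot be represented as an int, so this fails with
-- "Arithmetic overflow error converting expression to data type int."
SELECT ABS(CAST('-2147483648' AS int))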
You don't need the checksum(). Alas, you do need the view because this logic is in a function and NEWID() is side-effecting. But, you can use:
ORDER BY (SELECT New_ID from vw_GetNewID)
I suspect that the reason you are seeing this every million or so rows rather than every 4 billion rows or so is because the ORDER BY value is being evaluated multiple times for each row as part of the sorting process. Eventually, it is going to hit the lower limit.
EDIT:
If you care about efficiency, it is probably faster to do this using string operations rather than tables. I might suggest this version of the function:
CREATE VIEW vw_rand AS SELECT rand() as rand;
GO
CREATE FUNCTION dbo.fn_GenerateRandomString (
@stringLength int,
@upperCaseBit bit,
@lowerCaseBit bit,
@numberBit bit
)
RETURNS nvarchar(100)
AS
BEGIN
DECLARE @string NVARCHAR(255) = '';
-- Sanitise string length values.
IF ISNULL(@stringLength, -1) < 0
SET @stringLength = 0;
DECLARE @lets VARCHAR(255) = '';
IF (@upperCaseBit = 1) SET @lets = @lets + 'ABC';
IF (@lowerCaseBit = 1) SET @lets = @lets + 'abc';
IF (@numberBit = 1) SET @lets = @lets + '0123456789';
DECLARE @len int = len(@lets);
WHILE @stringLength > 0 BEGIN
SELECT @string += SUBSTRING(@lets, 1 + CAST(rand * @len as INT), 1)
FROM vw_rand;
SET @stringLength = @stringLength - 1;
END;
RETURN @string
END;
As a note: rand() is documented as being exclusive of the end of its range, so you don't have to worry about it returning exactly 1.
Also, this version is subtly different from your version because it can pull the same letter more than once (and as a consequence can also handle longer strings). I think this is actually a benefit.

Comparable varchar "arrays" in separate fields but on the same row

I have a table that looks like this:
memberno(int)|member_mouth (varchar)|Inspected_Date (varchar)
-----------------------------------------------------------------------------
12 |'1;2;3;4;5;6;7' |'12-01-01;12-02-02;12-03-03' [7 members]
So, by looking at how this table has been structured (poorly, yes):
The values in the member_mouth field are a string delimited by ";".
The values in the Inspected_Date field are a string delimited by ";".
So, for each delimited value in member_mouth there is a corresponding Inspected_Date value delimited inside the string.
This table has about 4 million records. We have an application written in C# that normalizes the data and stores it in a separate table. The problem is that, because of the size of the table, it takes a long time to process (the example above is nothing compared to the actual table, which is much larger and has a couple of those string "array" fields).
My question is this: what would be the best and fastest way to normalize this data in an MSSQL stored procedure, i.e. let MSSQL do the work rather than a C# app?
The best way will be SQL itself. The approach in the code below worked well for me with 2-3 lakh (200,000-300,000) rows.
I am not sure how it behaves with 4 million rows, but it may help.
Declare @table table
(memberno int, member_mouth varchar(100),Inspected_Date varchar(400))
Insert into @table Values
(12,'1;2;3;4;5;6;7','12-01-01;12-02-02;12-03-03;12-04-04;12-05-05;12-07-07;12-08-08'),
(14,'1','12-01-01'),
(19,'1;5;8;9;10;11;19','12-01-01;12-02-02;12-03-03;12-04-04;12-07-07;12-10-10;12-12-12')
Declare @tableDest table
(memberno int, member_mouth varchar(100),Inspected_Date varchar(400))
The table will look like this:
Select * from @table
See the code here:
------------------------------------------
Declare @max_len int,
@count int = 1
Set @max_len = (Select max(Len(member_mouth) - len(Replace(member_mouth,';','')) + 1)
From @table)
While @count <= @max_len
begin
Insert into @tableDest
Select memberno,
SUBSTRING(member_mouth,1,charindex(';',member_mouth)-1),
SUBSTRING(Inspected_Date,1,charindex(';',Inspected_Date)-1)
from @table
Where charindex(';',member_mouth) > 0
union
Select memberno,
member_mouth,
Inspected_Date
from @table
Where charindex(';',member_mouth) = 0
Delete from @table
Where charindex(';',member_mouth) = 0
Update @table
Set member_mouth = SUBSTRING(member_mouth,charindex(';',member_mouth)+1,len(member_mouth)),
Inspected_Date = SUBSTRING(Inspected_Date,charindex(';',Inspected_Date)+1,len(Inspected_Date))
Where charindex(';',member_mouth) > 0
Set @count = @count + 1
End
------------------------------------------
Select *
from @tableDest
Order By memberno
------------------------------------------
Result.
You can take a reference here.
Splitting delimited values in a SQL column into multiple rows
Do it on the SQL Server side; if possible, an SSIS package would be great.

SQL Server 2008 inconsistent results

I just released some code into production that is randomly causing errors. I already fixed the problem by totally changing the way I was doing the query. However, it still bothers me that I don't know what was causing the problem in the first place, so I was wondering if someone might know the answer. I have the following query inside a stored procedure. I'm not looking for comments about how it's bad practice to write queries with nested function calls and things like that :-). I just really want to find out why it doesn't work consistently. Randomly, the function in the query will return a non-numeric value and cause an error on the join. However, if I immediately rerun the query, it works fine.
SELECT cscsf.cloud_server_current_software_firewall_id,
dbo.fn_GetCustomerFriendlyFromRuleName(cscsf.rule_name, np.policy_name) as rule_name,
cscsf.rule_action,
cscsf.rule_direction,
cscsf.source_address,
cscsf.source_mask,
cscsf.destination_address,
cscsf.destination_mask,
cscsf.protocol,
cscsf.port_or_port_range,
cscsf.created_date_utc,
cscsf.created_by
FROM CLOUD_SERVER_CURRENT_SOFTWARE_FIREWALL cscsf
LEFT JOIN CLOUD_SERVER cs
ON cscsf.cloud_server_id = cs.cloud_server_id
LEFT JOIN CLOUD_ACCOUNT cla
ON cs.cloud_account_id = cla.cloud_account_id
LEFT JOIN CONFIGURATION co
ON cla.configuration_id = co.configuration_id
LEFT JOIN DEDICATED_ACCOUNT da
ON co.dedicated_account_id = da.dedicated_account_id
LEFT JOIN CORE_ACCOUNT ca
ON da.core_account_number = ca.core_account_id
LEFT JOIN NETWORK_POLICY np
ON np.network_policy_id = (select dbo.fn_GetIDFromRuleName(cscsf.rule_name))
WHERE cs.cloud_server_id = @cloud_server_id
AND cs.current_software_firewall_confg_guid = cscsf.config_guid
AND ca.core_account_id IS NOT NULL
ORDER BY cscsf.rule_direction, cscsf.cloud_server_current_software_firewall_id
if you notice the join
ON np.network_policy_id = (select dbo.fn_GetIDFromRuleName(cscsf.rule_name))
calls a function.
Here is that function:
ALTER FUNCTION [dbo].[fn_GetIDFromRuleName]
(
@rule_name varchar(100)
)
RETURNS varchar(12)
AS
BEGIN
DECLARE @value varchar(12)
SET @value = dbo.fn_SplitGetNthRow(@rule_name, '-', 2)
SET @value = dbo.fn_SplitGetNthRow(@value, '_', 2)
SET @value = dbo.fn_SplitGetNthRow(@value, '-', 1)
RETURN @value
END
Which then calls this function:
ALTER FUNCTION [dbo].[fn_SplitGetNthRow]
(
@sInputList varchar(MAX),
@sDelimiter varchar(10) = ',',
@sRowNumber int = 1
)
RETURNS varchar(MAX)
AS
BEGIN
DECLARE @value varchar(MAX)
SELECT @value = data_split.item
FROM
(
SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) as row_num FROM dbo.fn_Split(@sInputList, @sDelimiter)
) AS data_split
WHERE
data_split.row_num = @sRowNumber
IF @value IS NULL
SET @value = ''
RETURN @value
END
which finally calls this function:
ALTER FUNCTION [dbo].[fn_Split] (
@sInputList VARCHAR(MAX),
@sDelimiter VARCHAR(10) = ','
) RETURNS @List TABLE (item VARCHAR(MAX))
BEGIN
DECLARE @sItem VARCHAR(MAX)
WHILE CHARINDEX(@sDelimiter,@sInputList,0) <> 0
BEGIN
SELECT @sItem=RTRIM(LTRIM(SUBSTRING(@sInputList,1,CHARINDEX(@sDelimiter,@sInputList,0)-1))), @sInputList=RTRIM(LTRIM(SUBSTRING(@sInputList,CHARINDEX(@sDelimiter,@sInputList,0)+LEN(@sDelimiter),LEN(@sInputList))))
IF LEN(@sItem) > 0
INSERT INTO @List SELECT @sItem
END
IF LEN(@sInputList) > 0
INSERT INTO @List SELECT @sInputList -- Put the last item in
RETURN
END
The reason it is "randomly" returning different things has to do with how SQL Server optimizes queries, and where they get short-circuited.
One way to fix the problem is the change the return value of fn_GetIDFromRuleName:
return (case when isnumeric(@value) = 1 then @value end)
Or, change the join condition:
on np.network_policy_id = (select case when isnumeric(dbo.fn_GetIDFromRuleName(cscsf.rule_name)) = 1
then dbo.fn_GetIDFromRuleName(cscsf.rule_name) end)
The underlying problem is order of evaluation. The reason the CASE expression fixes the problem is that it checks for a numeric value before it converts, and SQL Server guarantees the order of evaluation within a CASE expression. As a note, you could still have problems converting numbers like '6e07' or '1.23', which are numeric but not integers.
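A quick standalone check (not part of the original query) shows the gap between what ISNUMERIC accepts and what actually converts to int:
SELECT ISNUMERIC('6e07') AS e_notation, ISNUMERIC('1.23') AS decimal_value -- both return 1
-- ...but either of these raises a conversion error:
-- SELECT CONVERT(int, '6e07')
-- SELECT CONVERT(int, '1.23')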
Why does it work sometimes? Well, clearly the query execution plan is changing, either statically or dynamically. The failing case is probably on a row that is excluded by the WHERE condition. Why does it try to do the conversion? The question is where the conversion happens.
Where the conversion happens depends on the query plan. This may, in turn, depend on when the table cscsf is read. If it is already in memory, then it might be read, and the conversion attempted, as a first step in the query. Then you would get the error. In another scenario, another table might be filtered first, and the offending rows removed before they are converted.
In any case, my advice is:
NEVER have implicit conversion in queries.
Use the case statement for explicit conversions.
Do not rely on WHERE clauses to filter data to make conversions work. Use the case statement.

SQL select multiple rows of data then compare

What would be the best approach in SQL Server 2008 to select something that can return up to 10 rows of data, then compare that data with a specific value in one of its columns?
So, something like this below:
SELECT bType FROM WORK_STATION WHERE nFileId = 123456789
This could return between 1 and 10 values max (it will always return at least one value). I then want to compare the data returned by the SQL statement above to a specific value, something like:
if bType = 1
--DO something
What is the best approach for doing something like this?
declare @table as table(btype int)
declare @btype int
insert into @table
SELECT bType FROM WORK_STATION WHERE nFileId = 123456789
while(exists(select top 1 'x' from @table)) --as long as @table contains records, continue
begin
select top 1 @btype = btype from @table
if(@btype = 10)
print 'something'
delete top (1) from @table --remove the previously processed row; also ensures no infinite loop
end
I think you can use a stored procedure to declare variables and then compare them with the result set; if you know that you will have at most 10 values, you can use a temp table and insert those 10 values.
I hope this is helpful.
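If all you need is to branch when any of the returned rows has bType = 1, a set-based check avoids the loop entirely. A sketch, using the table and filter from the question:
IF EXISTS (SELECT 1 FROM WORK_STATION WHERE nFileId = 123456789 AND bType = 1)
BEGIN
-- do something
PRINT 'found a row with bType = 1'
END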