How do I page results in SQL Server 2005?
I tried it in SQL Server 2000, but there was no reliable way to do this. I'm now wondering if SQL Server 2005 has any built in method?
What I mean by paging is, for example, if I list users by their username, I want to be able to only return the first 10 records, then the next 10 records and so on.
Any help would be much appreciated.
You can use the Row_Number() function.
It's used as follows:
SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
This yields a result set with a RowID column that you can then filter on to page through the results:
SELECT *
FROM
( SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
) As RowResults
WHERE RowID Between 5 AND 10
and so on.
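To fetch an arbitrary page rather than hard-coding 5 and 10, the bounds can be computed from a page number and page size. A small sketch (the @PageNumber and @PageSize variables are illustrative, not part of the original answer; DECLARE/SET is used because SQL Server 2005 doesn't allow inline initialization):
DECLARE @PageNumber INT, @PageSize INT   -- hypothetical inputs
SET @PageNumber = 2
SET @PageSize = 10

SELECT *
FROM
( SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
  FROM Users
) As RowResults
WHERE RowID BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize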
If you're trying to get it in one statement (the total plus the paging), you might need to explore SQL Server's support for the PARTITION BY clause (window functions, in ANSI SQL terms). In Oracle the syntax is just like the example above using row_number(), but I have also added a PARTITION BY clause to get the total number of rows included with each row returned in the page (the total row count is 1,262):
SELECT rn, total_rows, x.OWNER, x.object_name, x.object_type
FROM (SELECT COUNT (*) OVER (PARTITION BY owner) AS TOTAL_ROWS,
ROW_NUMBER () OVER (ORDER BY 1) AS rn, uo.*
FROM all_objects uo
WHERE owner = 'CSEIS') x
WHERE rn BETWEEN 6 AND 10
Note that I have where owner = 'CSEIS' and my partition by is on owner. So the results are:
RN TOTAL_ROWS OWNER OBJECT_NAME OBJECT_TYPE
6 1262 CSEIS CG$BDS_MODIFICATION_TYPES TRIGGER
7 1262 CSEIS CG$AUS_MODIFICATION_TYPES TRIGGER
8 1262 CSEIS CG$BDR_MODIFICATION_TYPES TRIGGER
9 1262 CSEIS CG$ADS_MODIFICATION_TYPES TRIGGER
10 1262 CSEIS CG$BIS_LANGUAGES TRIGGER
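SQL Server 2005 supports the same idea: COUNT(*) OVER() can return the total row count alongside ROW_NUMBER() in a single statement. A rough sketch against the Users table from the earlier answer (the column names are carried over from that example, not verified):
SELECT RowID, TotalRows, UserFirstName, UserLastName
FROM
( SELECT ROW_NUMBER() OVER(ORDER BY UserName) AS RowID,
         COUNT(*) OVER() AS TotalRows,
         UserFirstName, UserLastName
  FROM Users
) AS RowResults
WHERE RowID BETWEEN 6 AND 10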
The accepted answer for this doesn't actually work for me... I had to jump through one more hoop to get it to work.
When I tried the answer
SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
WHERE RowID Between 0 AND 9
it failed, complaining that it didn't know what RowID was.
I had to wrap it in an inner select like this:
SELECT *
FROM
(SELECT
Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
) innerSelect
WHERE RowID Between 0 AND 9
and then it worked.
When I need to do paging, I typically use a temporary table as well. You can use an output parameter to return the total number of records. The case statements in the select allow you to sort the data on specific columns without needing to resort to dynamic SQL.
--Declaration--
--Variables
@StartIndex INT,
@PageSize INT,
@SortColumn VARCHAR(50),
@SortDirection CHAR(3),
@Results INT OUTPUT

--Statements--
SELECT @Results = COUNT(CustomerId) FROM Customers
WHERE FirstName LIKE '%a%'

SET @StartIndex = @StartIndex - 1 --Either do this here or in code, but be consistent

CREATE TABLE #Page(ROW INT IDENTITY(1,1) NOT NULL, id INT, sorting_1 SQL_VARIANT, sorting_2 SQL_VARIANT)

INSERT INTO #Page(ID, sorting_1, sorting_2)
SELECT TOP (@StartIndex + @PageSize)
    ID,
    CASE
        WHEN @SortColumn='FirstName' AND @SortDirection='ASC' THEN CAST(FirstName AS SQL_VARIANT)
        WHEN @SortColumn='LastName'  AND @SortDirection='ASC' THEN CAST(LastName AS SQL_VARIANT)
        ELSE NULL
    END AS sort_1,
    CASE
        WHEN @SortColumn='FirstName' AND @SortDirection='DES' THEN CAST(FirstName AS SQL_VARIANT)
        WHEN @SortColumn='LastName'  AND @SortDirection='DES' THEN CAST(LastName AS SQL_VARIANT)
        ELSE NULL
    END AS sort_2
FROM (
    SELECT
        CustomerId AS ID,
        FirstName,
        LastName
    FROM Customers
    WHERE FirstName LIKE '%a%'
) C
ORDER BY sort_1 ASC, sort_2 DESC, ID ASC;

SELECT
    ID,
    Customers.FirstName,
    Customers.LastName
FROM #Page
INNER JOIN Customers ON ID = Customers.CustomerId
WHERE ROW > @StartIndex AND ROW <= (@StartIndex + @PageSize)
ORDER BY ROW ASC

DROP TABLE #Page
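For context, here is a hypothetical way to package and call the statements above; the procedure name dbo.GetCustomersPage is purely illustrative, not from the original post:
-- Assuming the variables above are the parameters of a stored procedure,
-- e.g. a hypothetical dbo.GetCustomersPage, the call might look like:
DECLARE @Total INT
EXEC dbo.GetCustomersPage
    @StartIndex = 1,
    @PageSize = 10,
    @SortColumn = 'LastName',
    @SortDirection = 'ASC',
    @Results = @Total OUTPUT
SELECT @Total AS TotalRecords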
I believe you'd need to perform a separate query to accomplish that, unfortunately.
I was able to accomplish this at my previous position using some help from this page:
Paging in DotNet 2.0
They also have it pulling a row count separately.
Here's what I do for paging: All of my big queries that need to be paged are coded as inserts into a temp table. The temp table has an identity field that will act in a similar manner to the row_number() mentioned above. I store the number of rows in the temp table in an output parameter so the calling code knows how many total records there are. The calling code also specifies which page it wants, and how many rows per page, which are selected out from the temp table.
The cool thing about doing it this way is that I also have an "Export" link that allows you to get all rows from the report returned as CSV above every grid in my application. This link uses the same stored procedure: you just return the contents of the temp table instead of doing the paging logic. This placates users who hate paging, and want to see everything, and want to sort it in a million different ways.
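A minimal sketch of that temp-table pattern, reusing the Users table from the first answer (the #Report temp table and the @PageNumber/@PageSize/@TotalRows names are illustrative assumptions, presumed to be stored-procedure parameters):
-- Assume @PageNumber, @PageSize and @TotalRows OUTPUT are procedure parameters.
CREATE TABLE #Report (ROW INT IDENTITY(1,1) NOT NULL, UserName VARCHAR(50), UserFirstName VARCHAR(50), UserLastName VARCHAR(50))

INSERT INTO #Report (UserName, UserFirstName, UserLastName)
SELECT UserName, UserFirstName, UserLastName
FROM Users
ORDER BY UserName                           -- identity values are assigned in this order

SELECT @TotalRows = COUNT(*) FROM #Report   -- total record count for the caller

-- Paging path: return only the requested page.
SELECT UserName, UserFirstName, UserLastName
FROM #Report
WHERE ROW BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize
ORDER BY ROW

-- Export path: return the whole temp table instead of just the page above.
DROP TABLE #Report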
I have the below view (table) in my database (SQL Server).
I want to retrieve 2 things from this table.
The object which has the latest booking date for each Product number.
It will return the objects = {0001, 2, 2019-06-06 10:39:58} and {0003, 2, 2019-06-07 12:39:58}.
If no step number has a booking date for a Product number, it will return the object with Step number = 1. It will return the object = {0002, 1, NULL}.
The view has 7,000,000 rows. I must do it using a native query.
The first query that retrieves the product with the latest booking date:
SELECT DISTINCT *
FROM TABLE t
WHERE t.BOOKING_DATE = (SELECT max(tbl.BOOKING_DATE) FROM TABLE tbl WHERE t.PRODUCT_NUMBER = tbl.PRODUCT_NUMBER)
The second query that retrieves the product with booking date NULL and Step number = 1;
SELECT DISTINCT *
FROM TABLE t
WHERE (SELECT max(tbl.BOOKING_DATE) FROM TABLE tbl WHERE t.PRODUCT_NUMBER = tbl.PRODUCT_NUMBER) IS NULL AND t.STEP_NUMBER = 1
I tried using a single query, but it takes too long.
For now I use two queries to get this information, but in the future I need to improve this. Do you have an alternative? I also cannot use a stored procedure or function inside SQL Server; I must do it with a native query from Java.
Try this:
Declare @p table(pnumber int, step int, bookdate datetime)
insert into @p values
 (1,1,'2019-01-01'),(1,2,'2019-01-02'),(1,3,'2019-01-03')
,(2,1,null),(2,2,null),(2,3,null)
,(3,1,null),(3,2,null),(3,3,'2019-01-03')

;With CTE as
(
    select pnumber, max(bookdate) bookdate
    from @p p1
    where bookdate is not null
    group by pnumber
)
select p.* from @p p
where exists(select 1 from CTE c
             where p.pnumber=c.pnumber and p.bookdate=c.bookdate)
union all
select p1.* from @p p1
where p1.bookdate is null and step=1
  and not exists(select 1 from CTE c
                 where p1.pnumber=c.pnumber)
If performance is the main concern, then whether you use one query or two does not matter; in the end, performance is what matters.
Create NonClustered index ix_Product on Product (ProductNumber,BookingDate,Stepnumber)
Go
If more than 90% of the rows have BookingDate not null (or, conversely, null), you can create a filtered index for that predicate instead (give it a distinct name if you keep both indexes):
Create NonClustered index ix_Product_Booked on Product (ProductNumber,BookingDate,Stepnumber)
where BookingDate is not null
Go
Try row_number() with a proper ordering. Null values are treated as the lowest possible values by SQL Server's ORDER BY.
SELECT TOP(1) WITH TIES *
FROM myTable t
ORDER BY row_number() over(partition by PRODUCT_NUMBER order by BOOKING_DATE DESC, STEP_NUMBER);
Pay attention to the indexes SQL Server advises in order to get good performance.
Possibly the most efficient method is a correlated subquery:
select t.*
from t
where t.step_number = (select top (1) t2.step_number
                       from t t2
                       where t2.product_number = t.product_number
                       order by t2.booking_date desc, t2.step_number
                      );
In particular, this can take advantage of an index on (product_number, booking_date desc, step_number).
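In index-creation form that would look something like the following (the index name is illustrative, and t is the answer's placeholder table name; since the question mentions a view, the index would actually go on the underlying table or an indexed view):
CREATE NONCLUSTERED INDEX ix_t_latest_step
    ON t (product_number, booking_date DESC, step_number);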
I need to update the following query so that it only returns one child record (remittance) per parent (claim).
Table Remit_To_Activate contains exactly one date/timestamp per claim, which is what I wanted.
But when I join the full Remittance table to it, since some claims have multiple remittances with the same date/timestamps, the outermost query returns more than 1 row per claim for those claim IDs.
SELECT * FROM REMITTANCE
WHERE BILLED_AMOUNT>0 AND ACTIVE=0
AND REMITTANCE_UUID IN (
SELECT REMITTANCE_UUID FROM Claims_Group2 G2
INNER JOIN Remit_To_Activate t ON (
(t.ClaimID = G2.CLAIM_ID) AND
(t.DATE_OF_LATEST_REGULAR_REMIT = G2.CREATE_DATETIME)
)
where ACTIVE=0 and BILLED_AMOUNT>0
)
I believe the problem would be resolved if I included REMITTANCE_UUID as a column in Remit_To_Activate. That's the REAL issue. This is how I created the Remit_To_Activate table (trying to get the most recent remittance for a claim):
SELECT MAX(create_datetime) AS DATE_OF_LATEST_REMIT,
       MAX(claim_id) AS ClaimID
INTO Latest_Remit_To_Activate
FROM Claims_Group2
WHERE BILLED_AMOUNT>0
GROUP BY Claim_ID
ORDER BY Claim_ID
Claims_Group2 contains these fields:
REMITTANCE_UUID,
CLAIM_ID,
BILLED_AMOUNT,
CREATE_DATETIME
Here are the 2 rows that are currently giving me the problem--they're both remitts for the SAME CLAIM, with the SAME TIMESTAMP. I only want one of them in the Remits_To_Activate table, so only ONE remittance will be "activated" per Claim:
[screenshot of the two duplicate rows]
You can change your query like this:
SELECT
    p.*, latest_remit.DATE_OF_LATEST_REMIT
FROM
    Remittance AS p
    INNER JOIN
    (SELECT MAX(create_datetime) AS DATE_OF_LATEST_REMIT,
            claim_id
     FROM Claims_Group2
     WHERE BILLED_AMOUNT > 0
     GROUP BY Claim_ID) AS latest_remit
    ON latest_remit.claim_id = p.claim_id;
This will give you only one row. Untested (so please run and make changes).
Without more information on the structure of your database (especially the structure of Claims_Group2 and REMITTANCE, and the relationship between them), it's not really possible to advise you on how to introduce a remittance UUID into DATE_OF_LATEST_REMIT.
Since you are using SQL Server, however, it is possible to use a window function to introduce a synthetic means to choose among remittances having the same timestamp. For example, it looks like you could approach the problem something like this:
select *
from (
select
r.*,
row_number() over (partition by cg2.claim_id order by cg2.create_datetime desc) as rn
from
remittance r
join claims_group2 cg2
on r.remittance_uuid = cg2.remittance_uuid
where
r.active = 0
and r.billed_amount > 0
and cg2.active = 0
and cg2.billed_amount > 0
) t
where t.rn = 1
Note that this does not depend on your DATE_OF_LATEST_REMIT table at all, it having been subsumed into the inline view. Note also that this will introduce one extra column into your results, though you could avoid that by enumerating the columns of table remittance in the outer select clause.
It also seems odd to be filtering on two sets of active and billed_amount columns, but that appears to follow from what you were doing in your original queries. In that vein, I urge you to check the results carefully, as lifting the filter conditions on cg2 columns up to the level of the join to remittance yields a result that may return rows that the original query did not (but never more than one per claim_id).
A co-worker offered me this elegant demonstration of a solution. I'd never used "over" or "partition" before. Works great! Thank you John and Gaurasvsa for your input.
if OBJECT_ID('tempdb..#t') is not null
drop table #t
select *, ROW_NUMBER() over (partition by CLAIM_ID order by CLAIM_ID) as ROW_NUM
into #t
from
(
select '2018-08-15 13:07:50.933' as CREATE_DATE, 1 as CLAIM_ID, NEWID() as REMIT_UUID
union select '2018-08-15 13:07:50.933', 1, NEWID()
union select '2017-12-31 10:00:00.000', 2, NEWID()
) x
select *
from #t
order by CLAIM_ID, ROW_NUM
select CREATE_DATE, MAX(CLAIM_ID), MAX(REMIT_UUID)
from #t
where ROW_NUM = 1
group by CREATE_DATE
I have a bit of a weird question, given to me by a client.
He has a list of data, with a date between parentheses like so:
Foo (14/08/2012)
Bar (15/08/2012)
Bar (16/09/2012)
Xyz (20/10/2012)
However, he wants the list to be displayed as follows:
Foo (14/08/2012)
Bar (16/09/2012)
Bar (15/08/2012)
Xyz (20/10/2012)
(notice that the second Bar has moved up one position)
So, the logic behind it is, that the list has to be sorted by date ascending, EXCEPT when two rows have the same name ('Bar'). If they have the same name, it must be sorted with the LATEST date at the top, while staying in the other sorting order.
Is this even remotely possible? I've experimented with a lot of ORDER BY clauses, but couldn't find the right one. Does anyone have an idea?
I should have specified that this data comes from a table in a sql server database (the Name and the date are in two different columns). So I'm looking for a SQL-query that can do the sorting I want.
(I've dumbed this example down quite a bit, so if you need more context, don't hesitate to ask)
This works, I think
declare @t table (data varchar(50), date datetime)
insert @t
values
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-09-16'),
('Xyz','2012-10-20')

select t.*
from @t t
inner join (select data, COUNT(*) cg, MAX(date) as mg from @t group by data) tc
    on t.data = tc.data
order by case when cg>1 then mg else date end, date desc
produces
data date
---------- -----------------------
Foo 2012-08-14 00:00:00.000
Bar 2012-09-16 00:00:00.000
Bar 2012-08-15 00:00:00.000
Xyz 2012-10-20 00:00:00.000
A way with better performance than any of the other posted answers is to just do it entirely with an ORDER BY and not a JOIN or using CTE:
DECLARE @t TABLE (myData varchar(50), myDate datetime)
INSERT INTO @t VALUES
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-09-16'),
('Xyz','2012-10-20')

SELECT *
FROM @t t1
ORDER BY (SELECT MIN(t2.myDate) FROM @t t2 WHERE t2.myData = t1.myData), t1.myDate DESC
This does exactly what you request, works with any indexes, and handles larger amounts of data much better than any of the other answers.
Additionally it's much more clear what you're actually trying to do here, rather than masking the real logic with the complexity of a join and checking the count of joined items.
This one uses analytic functions to perform the sort; it only requires one SELECT from your table.
The inner query finds gaps, where the name changes. These gaps are used to identify groups in the next query, and the outer query does the final sorting by these groups.
I have tried it here (SQL Fiddle) with extended test-data.
SELECT name, dat
FROM (
SELECT name, dat, SUM(gap) over(ORDER BY dat, name) AS grp
FROM (
SELECT name, dat,
CASE WHEN LAG(name) OVER (ORDER BY dat, name) = name THEN 0 ELSE 1 END AS gap
FROM t
) x
) y
ORDER BY grp, dat DESC
Extended test-data
('Bar','2012-08-12'),
('Bar','2012-08-11'),
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-08-16'),
('Bar','2012-09-17'),
('Xyz','2012-10-20')
Result
Bar 2012-08-12
Bar 2012-08-11
Foo 2012-08-14
Bar 2012-09-17
Bar 2012-08-16
Bar 2012-08-15
Xyz 2012-10-20
I think that this works, including the case I asked about in the comments:
declare @t table (data varchar(50), [date] datetime)
insert @t
values
('Foo','20120814'),
('Bar','20120815'),
('Bar','20120916'),
('Xyz','20121020')
; With OuterSort as (
select *,ROW_NUMBER() OVER (ORDER BY [date] asc) as rn from @t
)
--Now we need to find contiguous ranges of the same data value, and the min and max row number for such a range
, Islands as (
select data,rn as rnMin,rn as rnMax from OuterSort os where not exists (select * from OuterSort os2 where os2.data = os.data and os2.rn = os.rn - 1)
union all
select i.data,rnMin,os.rn
from
Islands i
inner join
OuterSort os
on
i.data = os.data and
i.rnMax = os.rn-1
), FullIslands as (
select
data,rnMin,MAX(rnMax) as rnMax
from Islands
group by data,rnMin
)
select
*
from
OuterSort os
inner join
FullIslands fi
on
os.rn between fi.rnMin and fi.rnMax
order by
fi.rnMin asc,os.rn desc
It works by first computing the initial ordering in the OuterSort CTE. Then, using two CTEs (Islands and FullIslands), we compute the parts of that ordering in which the same data value appears in adjacent rows. Having done that, we can compute the final ordering by any value that all adjacent values will have (such as the lowest row number of the "island" that they belong to), and then within an "island", we use the reverse of the originally computed sort order.
Note that this may, though, not be too efficient for large data sets. On the sample data it shows up as requiring 4 table scans of the base table, as well as a spool.
Try something like...
ORDER BY CASE date
WHEN '14/08/2012' THEN 1
WHEN '16/09/2012' THEN 2
WHEN '15/08/2012' THEN 3
WHEN '20/10/2012' THEN 4
END
In MySQL, you can do:
ORDER BY FIELD(date, '14/08/2012', '16/09/2012', '15/08/2012', '20/10/2012')
In Postgres, you can create a function FIELD and do:
CREATE OR REPLACE FUNCTION field(anyelement, anyarray) RETURNS numeric AS $$
SELECT
COALESCE((SELECT i
FROM generate_series(1, array_upper($2, 1)) gs(i)
WHERE $2[i] = $1),
0);
$$ LANGUAGE SQL STABLE
If you do not want to use the CASE, you can try to find an implementation of the FIELD function to SQL Server.
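One hedged way to emulate FIELD in SQL Server is to join the values to their desired positions; a sketch below reuses the @t table variable and its myData/myDate columns from an earlier answer in this thread, and the VALUES row constructor needs SQL Server 2008 or later:
SELECT t.*
FROM @t t
LEFT JOIN (VALUES ('20120814', 1),
                  ('20120916', 2),
                  ('20120815', 3),
                  ('20121020', 4)) AS f(d, pos)
    ON t.myDate = CONVERT(datetime, f.d)
ORDER BY f.pos   -- rows not in the list sort first, because pos is NULL for them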
I'm querying a single table called PhoneCallNotes. The caller FirstName, LastName and DOB are recorded for each call as well as many other fields including a unique ID for the call (PhoneNoteID) but no unique ID for the caller.
My requirement is to return a list of callers with duplicates removed along with the PhoneNoteID, etc from their most recent entry.
I can get the list of users I want using a GROUP BY on name, DOB and MAX(CreatedOn), but how do I include the unique ID (of the most recent entry) in the results?
select O.CallerFName,O.CallerLName,O.CallerDOB,Max(O.CreatedOn)
from [dbo].[PhoneCallNotes] as O
where O.CallerLName like 'Public'
group by O.CallerFName,O.CallerLName,O.CallerDOB order by Max(O.CreatedOn)
Results:
John Public 4/4/2001 4/6/12 16:42
Joe Public 4/12/1988 4/6/12 16:52
John Public 1/2/1950 4/6/12 17:01
Thanks
You can also write what Andrey wrote somewhat more compactly if you select TOP (1) WITH TIES and put the ROW_NUMBER() expression in the ORDER BY clause:
SELECT TOP (1) WITH TIES
CallerFName,
CallerLName,
CallerDOB,
CreatedOn,
PhoneNoteID
FROM [dbo].[PhoneCallNotes]
WHERE CallerLName = 'Public'
ORDER BY ROW_NUMBER() OVER(
PARTITION BY CallerFName, CallerLName, CallerDOB
ORDER BY CreatedOn DESC
)
(By the way, there's no reason to use LIKE for a simple string comparison.)
Try something like this:
;WITH CTE AS (
SELECT
O.CallerFName,
O.CallerLName,
O.CallerDOB,
O.CreatedOn,
PhoneNoteID,
ROW_NUMBER() OVER(PARTITION BY O.CallerFName, O.CallerLName, O.CallerDOB ORDER BY O.CreatedOn DESC) AS rn
FROM [dbo].[PhoneCallNotes] AS O
WHERE
O.CallerLName LIKE 'Public'
)
SELECT
CallerFName,
CallerLName,
CallerDOB,
CreatedOn,
PhoneNoteID
FROM CTE
WHERE rn = 1
ORDER BY
CreatedOn
Assuming that the set of [FirstName, LastName, DateOfBirth] is unique (shudder), I believe the following should work on pretty much every major RDBMS:
SELECT a.callerFName, a.callerLName, a.callerDOB, a.createdOn, a.phoneNoteId
FROM phoneCallNotes as a
LEFT JOIN phoneCallNotes as b
ON b.callerFName = a.callerFName
AND b.callerLName = a.callerLName
AND b.callerDOB = a.callerDOB
AND b.createdOn > a.createdOn
WHERE a.callerLName LIKE 'Public'
AND b.phoneNoteId IS NULL
Basically, the query is looking for every phone-call-note for a particular name/dob combination, where there is not a more-recent row (b is null). If you have two rows with the same create time, you'll get duplicate rows, though.
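If that's a concern, one sketch of a tie-breaker is to also compare the (presumably unique) phoneNoteId when the create times match, so exactly one row per name/DOB survives:
SELECT a.callerFName, a.callerLName, a.callerDOB, a.createdOn, a.phoneNoteId
FROM phoneCallNotes AS a
LEFT JOIN phoneCallNotes AS b
       ON b.callerFName = a.callerFName
      AND b.callerLName = a.callerLName
      AND b.callerDOB = a.callerDOB
      AND (b.createdOn > a.createdOn
           OR (b.createdOn = a.createdOn AND b.phoneNoteId > a.phoneNoteId))
WHERE a.callerLName LIKE 'Public'
  AND b.phoneNoteId IS NULL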
I have 2 tables that I'm trying to query. The first has a list of meters. The second, has the data for those meters. I want to get the newest reading for each meter.
Originally, this was in a group by statement, but it ended up processing all 7 million rows in our database, and took a little over a second. A subquery and a number of other ways of writing it had the same result.
I have a clustered index that covers the EndTime and the MeterDataConfigurationId columns in the MeterRecordings table.
Ultimately, this is what I wrote, which performs in about 20 milliseconds. It seems like SQL should be smart enough to perform the "group by" query in the same time.
Declare @Meters Table
(
    MeterId Integer,
    LastValue float,
    LastTimestamp DateTime
)

Declare MeterCursor Cursor For
    Select Id
    From MeterDataConfiguration

Declare @MeterId Int

Open MeterCursor
Fetch Next From MeterCursor Into @MeterId

While @@FETCH_STATUS = 0
Begin
    Declare @LastValue float
    Declare @LastTimestamp DateTime

    Select @LastValue = mr.DataValue, @LastTimestamp = mr.EndTime
    From MeterRecording mr
    Where mr.MeterDataConfigurationId = @MeterId
    And mr.EndTime = (Select MAX(EndTime) from MeterRecording mr2 Where mr2.MeterDataConfigurationId = @MeterId)

    Insert Into @Meters
    Select @MeterId, @LastValue, @LastTimestamp

    Fetch Next From MeterCursor Into @MeterId
End

Close MeterCursor
Deallocate MeterCursor

Select *
From @Meters
Here is an example of the same query that performs horribly:
select mdc.id, mr.EndTime
from MeterDataConfiguration mdc
inner join MeterRecording mr on
mr.MeterDataConfigurationId = mdc.Id
and mr.EndTime = (select MAX(EndTime) from MeterRecording mr2 where MeterDataConfigurationId = mdc.Id)
You can try a CTE (Common Table Expression) using ROW_NUMBER:
;WITH Readings AS
(
SELECT
mdc.id, mr.EndTime,
ROW_NUMBER() OVER(PARTITION BY mdc.id ORDER BY mr.EndTime DESC) AS 'RowID'
FROM dbo.MeterDataConfiguration mdc
INNER JOIN dbo.MeterRecording mr ON mr.MeterDataConfigurationId = mdc.Id
)
SELECT
ID, EndTime, RowID
FROM
Readings
WHERE
RowID = 1
This creates "partitions" of data, one for each mdc.id, and numbers them sequentially, descending on mr.EndTime, so for each partition, you get the most recent reading as the RowID = 1 row.
Of course, to get decent performance, you need appropriate indices on:
mr.MeterDataConfigurationId since it's a foreign key into MeterDataConfiguration, right?
mr.EndTime since you do an ORDER BY on it
mdc.Id which I assume is a primary key, so it's indexed already
Update: sorry, I missed this tidbit:
I have a clustered index that covers the EndTime and the MeterDataConfigurationId columns in the MeterRecordings table.
Quite honestly: I would toss that. Don't you have some other unique ID on the MeterRecordings table that would be suitable as a clustering index? An INT IDENTITY ID or something?
If you have a compound index on (EndTime, MeterDataConfigurationId), it can't serve both purposes well (ordering on EndTime and joining on MeterDataConfigurationId); one of them will not be doable, which is a pity!
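A hedged sketch of what that could look like; it assumes MeterRecording has (or is given) an INT IDENTITY column named Id, and that the existing clustered index is dropped first, while the other names come from the question:
-- Narrow, ever-increasing clustering key (drop the current clustered index on
-- (EndTime, MeterDataConfigurationId) before creating this one):
CREATE UNIQUE CLUSTERED INDEX CIX_MeterRecording_Id
    ON MeterRecording (Id);

-- Nonclustered index to support the "latest reading per meter" lookup:
CREATE NONCLUSTERED INDEX IX_MeterRecording_Config_EndTime
    ON MeterRecording (MeterDataConfigurationId, EndTime DESC)
    INCLUDE (DataValue);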
How does this query perform? It gets all the data from MeterRecording, ignoring the list in MeterDataConfiguration. If that is not safe to do, MeterDataConfiguration can be joined to this query to restrict the output.
SELECT Id, DataValue, EndTime
FROM (
select mr.MeterDataConfigurationId as Id,
mr.DataValue,
mr.EndTime,
RANK() OVER(PARTITION BY mr.MeterDataConfigurationId
ORDER BY mr.EndTime DESC) as r
from MeterRecording mr) as M
WHERE M.r = 1
I would go with marc's answer, but if you ever need to use cursors again (you should try to avoid them) and you need to process a lot of records, I would suggest that you create a temp table (or table variable) that has all the columns from the table you are reading plus an auto-generated identity field (IDENTITY(1,1)), and then just use a WHILE loop to read from the table. Basically, increment an int variable (call it @id) inside the loop and do
select
    @col1Value = column1,
    @col2Value = column2, ...
from #temp_table
where id = @id
This behaves just like a cursor, but I find it to be much faster.
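A minimal sketch of that pattern, reusing the MeterDataConfiguration table from the question (the #MeterIds temp table and the variable names are illustrative; the per-meter work is left as a placeholder):
CREATE TABLE #MeterIds (RowId INT IDENTITY(1,1) NOT NULL, MeterId INT)

INSERT INTO #MeterIds (MeterId)
SELECT Id FROM MeterDataConfiguration

DECLARE @Id INT, @MaxId INT, @MeterId INT
SET @Id = 1
SELECT @MaxId = MAX(RowId) FROM #MeterIds

WHILE @Id <= @MaxId
BEGIN
    SELECT @MeterId = MeterId FROM #MeterIds WHERE RowId = @Id

    -- ...per-meter work goes here, e.g. the latest-reading lookup from the cursor version above...

    SET @Id = @Id + 1
END

DROP TABLE #MeterIds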