Efficient paging in SQL Server 2000 using ROWCOUNT - sql-server-2000

I'm looking for ways to do paging in SQL Server 2000, and from all the research done so far, most solutions on the SQL Server 2000 platform use the same technique as the one illustrated by Greg Hamilton on 4GuysfromRolla.
The issue with his approach is that it sorts on "employeeid", an auto-increment IDENTITY column. If I want to sort on some other column that might have NULL or non-unique values, it falls apart.
The only solution that works is the one mentioned in the same article by "Dave Griffiths", which uses a tempdb table.
My question: does anyone know of another solution that works without the need for tempdb and handles non-unique or NULL column data?
Thanks
Note: it has to be a SQL Server 2000 solution. I need the lowest common denominator.
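For what it's worth, here is a minimal sketch of the same ROWCOUNT technique extended with the unique primary key as a tie-breaker, so a non-unique sort column cannot cause rows to be skipped or repeated across pages. All table and column names (Employees, LastName, EmployeeId) are hypothetical, and NULLs in the sort column are simply excluded for brevity; in practice you would page them separately or COALESCE them to a sentinel value.

CREATE PROCEDURE dbo.GetEmployeesPage
    @PageNum  int,
    @PageSize int
AS
SET NOCOUNT ON

DECLARE @LastName varchar(50), @LastId int, @Skip int

IF @PageNum <= 1
BEGIN
    -- First page: nothing to skip.
    SET ROWCOUNT @PageSize
    SELECT EmployeeId, LastName
    FROM Employees
    WHERE LastName IS NOT NULL          -- NULL handling omitted for brevity
    ORDER BY LastName, EmployeeId
    SET ROWCOUNT 0
    RETURN
END

-- Position on the last row of the previous page, capturing both its
-- sort value and its unique key. (SET ROWCOUNT takes a constant or a
-- variable, not an expression, hence the intermediate @Skip.)
SET @Skip = (@PageNum - 1) * @PageSize
SET ROWCOUNT @Skip
SELECT @LastName = LastName, @LastId = EmployeeId
FROM Employees
WHERE LastName IS NOT NULL
ORDER BY LastName, EmployeeId

-- Fetch the requested page, seeking past the anchor row. The unique
-- key in the comparison is what keeps duplicate LastName values from
-- being skipped or repeated.
SET ROWCOUNT @PageSize
SELECT EmployeeId, LastName
FROM Employees
WHERE LastName IS NOT NULL
  AND (LastName > @LastName
       OR (LastName = @LastName AND EmployeeId > @LastId))
ORDER BY LastName, EmployeeId

SET ROWCOUNT 0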

Related

Why manage your own SQL Server ID column?

I recently started a new job, and I am perplexed as to why the tables were designed this way (in many of the databases). Can someone give me a logical explanation?
Each table has a primary key/ID field. Example: EmployeeId (integer).
Then, to get the next ID, we actually need to query and update a table that manages all the keys for every table:
SELECT NextId
FROM dbo.NextID
WHERE TableName = 'Employees'
This makes life difficult, as you can imagine. The person who designed this mess has left, and the others just accept that this is the way you do things.
Is there some design flaw in MS SQL IDENTITY columns? I don't get it. Any ideas?
Thanks for your input
The features and limitations of IDENTITY columns make them useful for generating surrogate keys in many scenarios, but they are not ideal for all purposes, such as creating "meaningful", managed and/or potentially updateable identifiers usable by the business, or for data integration or replication. Microsoft introduced the SEQUENCE feature as a more flexible alternative to IDENTITY in SQL Server 2012. In code written for earlier versions, where sequences weren't available, it isn't unusual to see the kind of scheme you have described.
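If you do have to live with such a key table, the read and the increment at least need to happen in one atomic statement, or two concurrent sessions can be handed the same ID. A minimal sketch, assuming the dbo.NextID table described above:

-- Increment and read the next ID in a single atomic statement, so
-- concurrent callers cannot receive the same value.
DECLARE @NextId int

UPDATE dbo.NextID
SET @NextId = NextId = NextId + 1
WHERE TableName = 'Employees'

SELECT @NextId AS NextEmployeeId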
My guess is that the person wanted no gaps in the ID column, and therefore implemented this unnecessary process of getting the next available ID.
Maybe your application depends on sequential IDs. Either way, that is not the way to go: your application should not depend on values being sequential, and an IDENTITY column is no doubt the right tool for this kind of requirement.
Issue with IDENTITY columns
Yes, there was a known issue with IDENTITY columns in SQL Server 2012: identity values could take big jumps (typically of 1,000) when new identity values were created after a service restart, because pre-allocated values are cached. Still, it should not matter.

SQL Server 2012 Query Performance

I will soon be starting a project using SQL Server 2012 in which I will be required to provide real-time querying of database tables, with over 4 billion records in one of the tables alone. I am fairly familiar with SQL Server (I have indexes on the relevant columns), but have never had to deal with databases this large before.
I have been looking into partitioning and am fairly confident using it; however, it is only available in the Enterprise edition(?), for which the licenses are WAY too expensive. Columnstore indexes also look promising, but besides being Enterprise-only, they also render your table read-only(??). Another option is to archive data as soon as it is no longer used live, so that I keep as little data in the live tables as possible.
The main queries on the largest table will be on an NVARCHAR(50) column which contains an ID. Initial testing with 4 billion records, using a query to pull a single record based on the ID, is taking in excess of 5 minutes even with indexing. So my question is (and sorry if it sounds naive!): can somebody please suggest a way to speed up the queries on this table that I haven't mentioned (and therefore don't know about)? Many thanks in advance.
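For context, a seek into a B-tree index should return a single row in milliseconds even at 4 billion rows, so five minutes strongly suggests the plan is scanning rather than seeking. Two things worth checking, sketched below with hypothetical table and column names: that the index covers the columns the query returns, and that the parameter type matches the column so no implicit conversion creeps into the plan.

-- Hypothetical names. A nonclustered index keyed on the NVARCHAR ID,
-- INCLUDE-ing the returned columns so the lookup never has to touch
-- the base table.
CREATE NONCLUSTERED INDEX IX_BigTable_ExternalId
ON dbo.BigTable (ExternalId)
INCLUDE (CreatedAt, Payload);

-- Keep the parameter type identical to the column type; a mismatch
-- can introduce implicit conversions that degrade the plan.
DECLARE @Id nvarchar(50) = N'ABC-123';

SELECT ExternalId, CreatedAt, Payload
FROM dbo.BigTable
WHERE ExternalId = @Id;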

What is the most efficient way to index in SQL Server on three nchar columns in this case?

Abstract:
I'm creating a new table in SQL Server 2012 as part of a business system design. The table has three columns (among others) of type nvarchar of various sizes, and my web app needs to query those three columns using a single string search term. The table will contain no more than 100K records.
I'd like to index these columns in SQL Server 2012 in the way that is most efficient for retrieving the results.
I'd like to emphasize that the questions I'm about to ask pertain to my particular case rather than to generic SQL index questions, though the answers might apply to generic questions as well.
Context
SQL Server 2012
Windows Server 2008
Table column definitions:
ItemNumber :: nvarchar(10)
Manufacturer :: nvarchar(20)
Description :: nvarchar(40)
Possible record count: up to 100K
Use Case:
An end user (one of 1,000 or so) will pass a single string to search these three columns, and the query needs to return all rows where any of the three columns contains the value of the searched string (case-insensitively).
The Questions:
What is the best way to create the index(es) so that the query returns the data most efficiently (fast, while minimizing SQL Server resource usage)?
Create an index for each column?
Create one index with all 3 columns included?
Enable full-text search on the index(es)?
What method would exploit the full potential of what SQL Server 2012 can offer?
As @marc_s already suggested, full-text search is the only efficient option.
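A minimal sketch of what that could look like, assuming a table dbo.Items with a unique key index named PK_Items (all names hypothetical):

-- One-time setup: a full-text catalog plus a full-text index spanning
-- all three searchable columns, keyed on the table's unique index.
CREATE FULLTEXT CATALOG ItemsCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Items (ItemNumber, Manufacturer, Description)
KEY INDEX PK_Items ON ItemsCatalog;
GO

-- Search all three columns with a single term; the prefix form
-- ("widget*") matches words beginning with the search string.
DECLARE @Term nvarchar(100) = N'"widget*"';

SELECT ItemNumber, Manufacturer, Description
FROM dbo.Items
WHERE CONTAINS((ItemNumber, Manufacturer, Description), @Term);

One caveat worth knowing: CONTAINS matches whole words and word prefixes, not arbitrary mid-string substrings, so it behaves differently from LIKE '%term%'.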

Is it possible to do column partitioning in SQL Server?

The size of each record in a table is a performance parameter: the smaller the record, the more records SQL Server can fetch in each read from the physical disk.
In most of our queries we do not use all the columns of a table, and some columns are used only by specific queries. Is it possible to partition the columns of each table to get better performance?
I use SQL Server 2008 R2.
Thank you.
True column-level partitioning comes with column-oriented storage; see Inside the SQL Server 2012 Columnstore Indexes. But that is available only in SQL Server 2012 and addresses specific BI workloads, not general SQL Server apps.
In row-oriented storage, vertical partitioning is actually another name for designing proper covering indexes. If the engine has a narrower alternative index, it will use it instead of the base table whenever possible.
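As a sketch of that point (names hypothetical), a narrow covering index effectively gives a query its own vertical slice of the table:

-- A narrow index keyed on the filter column, carrying only the
-- columns the query needs, acts as a vertical "partition".
CREATE NONCLUSTERED INDEX IX_Orders_Customer
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Status);

-- This query can be answered entirely from the narrow index, without
-- reading the wide base-table rows.
SELECT CustomerId, OrderDate, Status
FROM dbo.Orders
WHERE CustomerId = 42;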
The last alternative, manually splitting the table and joining the vertical 'shards' in queries (or defining joining views, same thing), is usually ill advised and seldom pays off.
At the moment, with SQL Server 2008, you cannot partition tables vertically. If you have a large number of columns, you would need to chop the table into vertical chunk tables that share a common key and then skin them with an updateable view to give the illusion of one very wide table.
If there are just a few large columns (e.g. VARCHAR(1000)), you can normalize your data into unique value tables.
The one exception to the no-column-partitioning rule is character columns declared as max (varchar(max), for example).
These can be stored on a separate data page. I believe this page is not read unless the column is referenced in the query. If I am wrong, I'm sure more knowledgeable people will correct me.
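For reference, a sketch of the table option that pushes MAX-typed values off-row (the table name is hypothetical):

-- Store varchar(max)/nvarchar(max)/varbinary(max) values off-row,
-- leaving only a 16-byte pointer in the data row itself.
EXEC sp_tableoption 'dbo.Documents', 'large value types out of row', 1;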

SQL Server 2008 Slow Table, Table Partitioning

I have a table that has grown to over 1 million records... today (all valid).
I need to speed it up... would table partitioning be the answer? If so, can I get some help on building the query?
The table has 4 bigint value keys and that's all, with an indexed primary key and a descending index on UserId; the other values are at most 139 (there are just over 10,000 users now).
Any help or direction would be appreciated :)
You should investigate your indexes and query workload before thinking about partitioning. If you have done a large number of inserts, your clustered index may be fragmented.
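A quick way to check that, assuming SQL Server 2005 or later (replace dbo.MyTable with the real table name):

-- Average fragmentation for every index on the table.
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- A common rule of thumb is to rebuild once fragmentation passes ~30%.
ALTER INDEX ALL ON dbo.MyTable REBUILD;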
Even though you are using SQL Server Express, you can still profile using this free tool: Profiler for Microsoft SQL Server 2005/2008 Express Edition.
You probably just need to tune your queries and/or indexes; 1 million records shouldn't be causing you problems. I have a table with several hundred million records and am able to maintain pretty high performance. I have found the SQL Server Profiler to be pretty helpful with this stuff. It's available in SQL Server Management Studio (but not the Express version, unfortunately). You can also use Query > Include Actual Execution Plan to see a diagram of where time is being spent during the query.
I agree with the other comments. With a reasonably small database (largest table 1 million records), it's unlikely that any activity in the database would impose a noticeable load, provided queries are optimized and the rest of the code isn't abusing the database with redundant queries. It's a good opportunity to get a feel for the interplay between database queries and the rest of the code.
See my experiments on SQL table partitioning here: http://faiz.kera.la/2009/08/02/does-partitioning-improve-performance-for-sql-tables/. Hope this is helpful for you... And for your case, 1M is not a considerable figure; maybe you need to fine-tune the queries rather than go for partitioning.