SQL Server paging issue - sql-server-2005

Friends,
I have already implemented paging in my SP -
with MyData As (
    select ROW_NUMBER() over (order by somecolumn desc) AS [Row],
           x,y,z,...
    from SourceTable -- the source table name was omitted in the original query
)
Select x,y,z,...
From MyData
Where [Row] between ((@currentPage - 1) * @pageSize + 1) and (@currentPage * @pageSize)
The problem here is that the data is retrieved very fast when the WITH clause returns a small number of rows, but it takes a long time when there are millions of records. Sometimes it times out.
Is there any other alternative?
Thanks for sharing your valuable time.

SQL Server optimisation is a very broad subject and it is pretty much impossible to work out the issue with the limited amount of information you have posted. However, since you're in a rush for a solution: firstly, I would suggest checking your actual execution plan (and posting it here) and making sure that the index is actually being used. If it is not, then consider using the FASTFIRSTROW table hint to force the index to be used - check here and here on how it can improve things, and here on how it can make things worse.
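For reference, the hint goes on the table in the FROM clause. A minimal sketch against the query shape from the question (the table name is an assumption; FASTFIRSTROW is specific to older versions such as SQL Server 2005 and was later deprecated in favour of OPTION (FAST n)):
-- ask the optimizer for a plan that returns the first rows quickly
SELECT x, y, z
FROM MyTable WITH (FASTFIRSTROW)
ORDER BY somecolumn DESC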
Next to consider is SQL parameter sniffing - it's unlikely from what you have said, but possible; check here for details.
For large-scale performance gains you may need to look at architectural changes; at the very least, ensure that your transaction logs are on a different disk to your data. The reason you separate the database files from the log files is that database access is random while log access is sequential, and best practice dictates that you don't mix those two I/O types on the same disk.
Also, if you've got millions of rows then you really need to consider splitting the data across multiple disks.
Finally, I would strongly consider partitioning either the table or the index - see here for a start.
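As a rough illustration only (the partition boundaries, the data type and the object names are assumptions, not taken from your post), range partitioning in SQL Server 2005 looks like this:
CREATE PARTITION FUNCTION pf_MyData (int)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

CREATE PARTITION SCHEME ps_MyData
AS PARTITION pf_MyData ALL TO ([PRIMARY]);

-- creating the clustered index on the scheme partitions the table itself
CREATE CLUSTERED INDEX cix_MyTable ON MyTable (somecolumn)
ON ps_MyData (somecolumn);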

The reason why your query is slow is that you sort the whole table on every request. To speed it up significantly you need to avoid sorting that big chunk of data, at the cost of CPU, HDD/memory, or limitations on the pagination logic.
As there is not much information about how your table is sorted and whether you insert in the middle / delete entries very often, I'll narrow down your question by making these assumptions:
I would imagine you have a table storing an archive of articles. New entries are mostly at the bottom of the table, entries from the middle of the table deleted rarely.
You always sort by the same column, somecolumn, and in the same order, e.g. descending.
You do not have any user entered filters (like article title or author).
This makes the table static in terms of the output: each article stays in the same place unless a new one is inserted, and new ones come to the top of your output. You can therefore store ROW_NUMBER() OVER (ORDER BY somecolumn DESC) as a physical column. A more convenient solution is an IDENTITY column. It will speed things up if you create a clustered index on this column.
ALTER TABLE MyTable ADD [Record_Number] int NULL
This new column is added as null so you can populate values the first time. Then you can make it not null.
You can then get the last row number very quickly with
SELECT @Max_Row = MAX([Record_Number]) FROM MyTable
Now when you have total number of rows, page size and page number you can select rows you need in one statement without sorting the whole lot.
Select * From MyTable
Where [Record_Number] between
    (@Max_Row - @Page * @Page_Size) + 1 AND
    @Max_Row - (@Page - 1) * @Page_Size
If you do have a filter in your CTE, then give some more information about how your data is structured, so we can think of a way to limit the scope of the CTE.

Related

Should I generate massive amounts of SQL data on the client or in SQL Server?

I am writing a program to generate a massive (~1 billion records spread across ~20 tables) amount of data and populate tables in SQL Server. This data spans multiple tables with potentially multiple foreign key constraints, as well as multiple 'enum'-like tables whose distribution of values needs to be seemingly random as well and which are referenced often from other tables. This leads to a lot of ORDER BY NEWID() type code, which seems slow to me.
My question is: which strategy would be more performant:
Generate and insert data in SQL Server, using set based operations and a bunch of ORDER BY NEWID() to get randomness
Generate all the data on a client (should make operations like choosing a random value from an enum table much faster), then import the data into SQL Server
I can see some positives and negatives from both strategies. Obviously the generation of random data would be easier and probably more performant in the client. However getting that data to the server would be slow. Otherwise, importing the data and inserting it in a set based operation should be similar in scale.
Has anyone done something similar?
ORDER BY NEWID(), as other members have said, can be an extremely expensive operation.
There are other, faster ways of getting random data in SQL Server:
SELECT * FROM StackOverflow.dbo.Users TABLESAMPLE (.01 PERCENT);
or
DECLARE @row bigint = (
    SELECT RAND(CHECKSUM(NEWID())) * SUM([rows]) FROM sys.partitions
    WHERE index_id IN (0, 1) AND [object_id] = OBJECT_ID('dbo.thetable'));
SELECT *
FROM dbo.thetable
ORDER BY (SELECT NULL)
OFFSET @row ROWS FETCH NEXT 1 ROWS ONLY;
Credits to Brent Ozar and his recent blog post: https://www.brentozar.com/archive/2018/03/get-random-row-large-table/
I would choose to generate the massive data amounts on the RDBMS side.
You don't need to create billions of NEWID() values.
Create one table with a million random values and reference it a number of times; if your randomness repeats every million rows I suspect all would be OK.
Pick a random starting point and increment from it, using % on the increment to cycle.
If you need values 0 - n, again use a % (a sketch of this follows).
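A sketch of that idea in T-SQL (the table name, pool size and the row counter are illustrative assumptions):
-- one-off: a pool of a million pre-generated random values
CREATE TABLE dbo.RandomPool (
    n   int IDENTITY(1,1) PRIMARY KEY,
    val float NOT NULL
);

INSERT INTO dbo.RandomPool (val)
SELECT TOP (1000000) RAND(CHECKSUM(NEWID()))
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

-- during generation: start at a random offset and walk the pool, wrapping with %
DECLARE @start int = ABS(CHECKSUM(NEWID())) % 1000000;
DECLARE @i int = 12345;   -- current row counter on the generating side (illustrative)
SELECT val FROM dbo.RandomPool WHERE n = ((@start + @i) % 1000000) + 1;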

SQL Table causes slow running Queries

I have a SQL table that has approximately 6,000 records with 17 columns each. If I do a basic search on the table (i.e. Select * from table_Orders) it takes 1.5 minutes to return all the records! Any query that I run using this table is also very slow. I have reindexed the table, so fragmentation is not an issue. This table has 2 nvarchar(max) columns in it that are storing xml data. Returning the table without these columns is extremely fast (less than 1 second). So I'm guessing it's the xml data that is bogging down the queries. Is there anything I can do to speed up performance of queries that utilize columns with xml in them? Any insight will be greatly appreciated. I don't typically work with xml within sql, so I don't even know where to start.
This sounds like network speed and NOT search times. Even a table scan of 6000 rows only takes a fraction of a second - to search all the rows. Returning those rows to a client though... you're downloading all that data, so you're going to see a difference when you retrieve a lot of it. This has nothing to do with "query performance" and there isn't anything you can do about it unless you can make the network faster or deliver less data.
You can test this by issuing queries searching for a key in your clustered index. Assuming you have a clustered index on RowID...
select RowId, NonXmlColumn from table_Orders where RowId = 3 -- or some other reasonable key
select RowId, XmlColumn from table_Orders where RowId = 3
The search time for those queries will be the same. So, any difference in speed can be attributed to the network.
Not to sound like a jerk, but I would not store that data in a table, and if I did, I would not store it in NVARCHAR.
SQL Server has an xml datatype: http://msdn.microsoft.com/en-us/library/hh403385.aspx
That page notes some limitations; you should see which ones apply to your scenario.
If you need to preserve the XML - stick it somewhere else and pull from it whatever fields you'll need to search on.
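If you do keep it in the database, one option (a sketch only; the OrderXml column name and the XPath are made up for illustration) is to convert the nvarchar(max) column to the xml type and pull out just the fields you need instead of shipping the whole document:
-- works only if every stored value is well-formed XML
ALTER TABLE table_Orders ALTER COLUMN OrderXml xml;

SELECT RowId,
       OrderXml.value('(/Order/Total)[1]', 'decimal(18,2)') AS OrderTotal
FROM table_Orders
WHERE RowId = 3;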

Organizing lots of timestamped values in a DB (sql / nosql)

I have a device I'm polling for lots of different fields; every x milliseconds
the device returns a list of ids and values, which I need to store with a timestamp in a DB of sorts.
Users of the system need to be able to query this DB for historic logs to create graphs, or query the last timestamp for each value.
A simple approach would be to define a MySQL table with
id,value_id,timestamp,value
and let users select
Select value from t where value_id=x order by timestamp desc limit 1
and just push everything there with an index on timestamp and id. But my question is: what's the best approach, performance- and size-wise, for designing the schema? Or should I use NoSQL? Can anyone comment on possible design trade-offs? Will such a design scale to millions of records?
When you say "... or query the last timestamp for each value" is this what you had in mind?
select max(timestamp) from T where value = ?
If you have millions of records, and the above is what you meant (i.e. value is alone in the WHERE clause), then you'd need an index on the value column, otherwise you'd have to do a full table scan. But if queries will ALWAYS have [timestamp] column in the WHERE clause, you do not need an index on [value] column if there's an index on timestamp.
You need an index on the timestamp column if your users will issue queries where the timestamp column appears alone in the WHERE clause:
select * from T where timestamp > x and timestamp < y
You could index all three columns, but you want to make sure the writes do not slow down because of the indexing overhead.
The rule of thumb when you have a very large database is that every query should be able to make use of an index, so you can avoid a full table scan.
EDIT:
Adding some additional remarks after your clarification.
I am wondering how you will know the id? Is [id] perhaps a product code?
A single simple index on id might not scale very well if there are not many different product codes, i.e. if it's a low-cardinality index. The rebalancing of the trees could slow down the batch inserts that are happening every x milliseconds. A composite index on (id,timestamp) would be better than a simple index.
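A sketch of that composite index in MySQL, using the simple schema from the question (whether id or value_id is the product code here is an assumption you would need to confirm):
CREATE INDEX ix_t_id_timestamp ON t (id, `timestamp`);

-- "latest value for one id" then becomes a short reverse scan of the index
SELECT value FROM t WHERE id = 42 ORDER BY `timestamp` DESC LIMIT 1;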
If you rarely need to sort multiple products but are most often selecting based on a single product-code, then a non-traditional DBMS that uses a hashed-key sparse-table rather than a b-tree might be a very viable, even superior, alternative for you. In such a database, all of the records for a given key would be found physically on the same set of contiguous "pages"; the hashing algorithm looks at the key and returns the page number where the record will be found. There is no need to rebalance an index as there isn't an index, and so you completely avoid the related scaling worries.
However, while hashed-file databases excel at low-overhead nearly instant retrieval based on a key value, they tend to be poor performers at sorting large groups of records on an attribute, because the data are not stored physically in any meaningful order, and gathering the records can involve much thrashing. In your case, timestamp would be that attribute. If I were in your shoes, I would base my decision on the cardinality of the id: in a dataset of a million records, how many DISTINCT ids would be found?
YET ANOTHER EDIT SINCE THE SITE IS NOT LETTING ME ADD ANOTHER ANSWER:
The simplest way is to have two tables: one with the ongoing history, which always has new values inserted, and the other, containing only 250 records, one per part, where the latest value overwrites/replaces the previous one.
Update latest
set value = x
where id = ?
You have a choice of
indexes (composite; covering value_id, timestamp and value, or some combination of them): you should test performance with different indexes, composite and non-composite; also be aware that there are quite a few significantly different ways to get 'max per group' (search SO, especially the MySQL version with variables)
triggers - you might use triggers to maintain the max row values in another table (best performance for further selects; this data is redundant and could be kept in memory); a sketch follows this list
lazy statistics/triggers - since your database is updated quite often, you can save cycles if you update your statistics periodically (if you can allow the stats to be y seconds old and you poll 1000 / x times a second, then you potentially save y * 1000 / x updates; this can be noticeable, especially in terms of scalability)
The above is true if you are looking for the last bit of performance; if not, keep it simple.
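A sketch of the trigger option in MySQL (the latest table, its key and the column types are assumptions based on the discussion above):
CREATE TABLE latest (
    value_id int PRIMARY KEY,
    ts       datetime NOT NULL,
    value    double   NOT NULL
);

DELIMITER //
CREATE TRIGGER trg_t_after_insert
AFTER INSERT ON t
FOR EACH ROW
BEGIN
    -- keep one row per value_id, always holding the most recent reading
    INSERT INTO latest (value_id, ts, value)
    VALUES (NEW.value_id, NEW.`timestamp`, NEW.value)
    ON DUPLICATE KEY UPDATE ts = NEW.`timestamp`, value = NEW.value;
END//
DELIMITER ;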

Appropriate query and indexes for a logging table in SQL

Assume a table named 'log' that contains a huge number of records.
The application usually retrieves data by simple SQL:
SELECT *
FROM log
WHERE logLevel=2 AND (creationData BETWEEN ? AND ?)
logLevel and creationData have indexes, but the number of records makes it take longer to retrieve data.
How do we fix this?
Look at your execution plan / "EXPLAIN PLAN" result - if you are retrieving large amounts of data then there is very little that you can do to improve performance. You could try changing your SELECT statement to only include the columns you are interested in, however it won't change the number of logical reads that you are doing and so I suspect it will only have a negligible effect on performance.
If you are only retrieving small numbers of records then an index on LogLevel and an index on CreationDate should do the trick.
UPDATE: SQL Server is mostly geared around querying small subsets of massive databases (e.g. returning a single customer record out of a database of millions). It's not really geared up for returning truly large data sets. If the amount of data that you are returning is genuinely large then there is only a certain amount that you will be able to do and so I'd have to ask:
What is it that you are actually trying to achieve?
If you are displaying log messages to a user, then they are only going to be interested in a small subset at a time, and so you might also want to look into efficient methods of paging SQL data - if you are only returning even say 500 or so records at a time it should still be very fast.
If you are trying to do some sort of statistical analysis then you might want to replicate your data into a data store more suited to statistical analysis. (Not sure what however, that isn't my area of expertise)
1: Never use Select *
2: make sure your indexes are correct, and your statistics are up-to-date
3: (Optional) If you find you're not looking at log data past a certain time (in my experience, if it happened more than a week ago, I'm probably not going to need the log for it) set up a job to archive that to some back-up and then remove the unused records, as sketched below. That will keep the table size down, reducing the amount of time it takes to search the table.
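A minimal sketch of that archiving job (it assumes a log_archive table with the same structure, that creationData is the timestamp column from the question, and a one-week cut-off):
INSERT INTO log_archive
SELECT * FROM log
WHERE creationData < DATEADD(DAY, -7, GETDATE());

DELETE FROM log
WHERE creationData < DATEADD(DAY, -7, GETDATE());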
Depending on what kind of SQL database you're using, you might look into Horizontal Partitioning. Oftentimes, this can be done entirely on the database side of things, so you won't need to change your code.
Do you need all columns? First step should be to select only those you actually need to retrieve.
Another aspect is what you do with the data after it arrives in your application (populate a data set / read it sequentially / ?).
There can be some potential for improvement on the side of the processing application.
You should answer yourself these questions:
Do you need to hold all the returned data in memory at once? How much memory do you allocate per row on the retrieving side? How much memory do you need at once? Can you reuse some memory?
A couple of things
do you need all the columns? People usually do SELECT * because they are too lazy to list the 5 columns they need of the 15 that the table has.
Get more RAM; the more RAM you have, the more data can live in cache, which is 1000 times faster than reading from disk.
For me there are two things that you can do:
Partition the table horizontally based on the date column
Use the concept of pre-aggregation.
Pre-aggregation:
With pre-aggregation you would have a "logs" table, a "logs_temp" table, a "logs_summary" table and a "logs_archive" table. The structure of the logs and logs_temp tables is identical. The application flow would work this way: all logs are logged in the logs table, then every hour a cron job runs that does the following things (a rough sketch of the job follows the steps):
a. Copy the data from the logs table to "logs_temp" table and empty the logs table. This can be done using the Shadow Table trick.
b. Aggregate the logs for that particular hour from the logs_temp table
c. Save the aggregated results in the summary table
d. Copy the records from the logs_temp table to the logs_archive table and then empty the logs_temp table.
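A sketch of that hourly job in T-SQL (the summary columns and the hour-truncation expression are assumptions; the table names come from the description above):
-- a. move the last hour of raw rows out of the hot table
INSERT INTO logs_temp SELECT * FROM logs;
TRUNCATE TABLE logs;   -- or swap tables via the shadow-table trick

-- b + c. aggregate that hour and save the results in the summary table
INSERT INTO logs_summary (log_hour, logLevel, entry_count)
SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, creationData), 0), logLevel, COUNT(*)
FROM logs_temp
GROUP BY DATEADD(HOUR, DATEDIFF(HOUR, 0, creationData), 0), logLevel;

-- d. keep the raw rows in the archive, then empty the temp table
INSERT INTO logs_archive SELECT * FROM logs_temp;
TRUNCATE TABLE logs_temp;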
This way results are pre-aggregated in the summary table.
Whenever you wish to select the result, you would select it from the summary table.
This way the selects are very fast, because the number of records are far less as the data has been pre-aggregated per hour. You could even increase the threshold from an hour to a day. It all depends on your needs.
Now the inserts will be fast too, because there is not much data in the logs table, as it holds data only for the last hour, so index regeneration on inserts takes much less time than it would on a very large data set, making the inserts fast.
You can read more about Shadow Table trick here
I employed the pre-aggregation method in a news website built on WordPress. I had to develop a plugin for the news website that would show recently popular (popular during the last 3 days) news items; there are around 100K hits per day, and this pre-aggregation thing has really helped us a lot. The query time came down from more than 2 seconds to under a second. I intend to make the plugin publicly available soon.
As per other answers, do not use 'select *' unless you really need all the fields.
logLevel and creationData have indexes
You need a single index with both values; the order you put them in will affect performance, but assuming you have a small number of possible logLevel values (and the data is not skewed) you'll get better performance putting creationData first.
Note that, optimally, an index will reduce the cost of a query to log(N), i.e. it will still get slower as the number of records increases.
C.
I really hope that by creationData you mean creationDate.
First of all, it is not enough to have indexes on logLevel and creationData. If you have 2 separate indexes, Oracle will only be able to use 1.
What you need is a single index on both fields:
CREATE INDEX i_log_1 ON log (creationData, logLevel);
Note that I put creationData first. This way, if you only put that field in the WHERE clause, it will still be able to use the index. (Filtering on just date seems a more likely scenario than filtering on just log level.)
Then, make sure the table is populated with data (as much data as you will use in production) and refresh the statistics on the table.
If the table is large (at least few hundred thousand rows), use the following code to refresh the statistics:
DECLARE
    l_ownname          VARCHAR2(255) := 'owner'; -- Owner (schema) of table to analyze
    l_tabname          VARCHAR2(255) := 'log';   -- Table to analyze
    l_estimate_percent NUMBER(3)     := 5;       -- Percentage of rows to estimate (NULL means compute)
BEGIN
    dbms_stats.gather_table_stats (
        ownname          => l_ownname,
        tabname          => l_tabname,
        estimate_percent => l_estimate_percent,
        method_opt       => 'FOR ALL INDEXED COLUMNS',
        cascade          => TRUE
    );
END;
Otherwise, if the table is small, use
ANALYZE TABLE log COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;
Additionally, if the table grows large, you should consider partitioning it by range on the creationDate column. See these links for the details:
Oracle Documentation: Range Partitioning
OraFAQ: Range partitions
How to Create and Manage Partition Tables in Oracle

Performance of sql Count*

ASP.NET and SQL Server; I have SQL queries for selecting a subset of rows, and I need the COUNT(*) frequently.
Of course I can run a SELECT COUNT(*) for each of these queries on each round trip, but soon it will become too slow.
How do you make it really fast?
Are you experiencing a problem that can't be solved by adding another index to your table? COUNT(*) operations are usually O(log n) in terms of total rows, and O(n) in terms of returned rows.
Edit: What I mean is (in case I misunderstood your question)
Given this structure:
CREATE TABLE emails (
id INT,
.... OTHER FIELDS
)
CREATE TABLE filters (
filter_id int,
filter_expression nvarchar(max) -- Or whatever...
)
Create the table
CREATE TABLE email_filter_matches (
filter int,
email int,
CONSTRAINT pk_email_filter_matches PRIMARY KEY(filter, email)
)
The data in this table would have to be updated every time a filter is updated, or when a new email is received.
Then, a query like
SELECT COUNT(*) FROM email_filter_matches WHERE filter = @filter_id
should be O(log n) with regard to total number of filter matches, and O(n) in regard to number of matches for this particular filter. Since your example shows only a small number of matches (which seems realistic when it comes to email filters), this could very well be OK.
If you really want to, of course you could create a trigger on the email_filter_matches table to keep a cached value in the filters table in sync, but that can be done the day you hit performance issues. It's not trivial to get these kinds of things right in concurrent systems.
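For completeness, a sketch of that trigger (the match_count column is something you would add; the table and column names otherwise come from the structures above):
ALTER TABLE filters ADD match_count INT NOT NULL DEFAULT 0;
GO
CREATE TRIGGER trg_filter_match_count
ON email_filter_matches
AFTER INSERT, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- add counts for newly matched emails
    UPDATE f
    SET match_count = match_count + i.cnt
    FROM filters f
    JOIN (SELECT filter, COUNT(*) AS cnt FROM inserted GROUP BY filter) i
        ON i.filter = f.filter_id;
    -- subtract counts for removed matches
    UPDATE f
    SET match_count = match_count - d.cnt
    FROM filters f
    JOIN (SELECT filter, COUNT(*) AS cnt FROM deleted GROUP BY filter) d
        ON d.filter = f.filter_id;
END;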
Here are a few ideas for speeding up count(*) at the data tier:
Keep the table and the clustered index as narrow as possible, so that more rows fit per page
Keep the filtering criteria as simple as possible, so the counting goes fast
Do what you can to make sure the rows to be counted are in memory before you start to count them (perhaps using pre-caching)
Make sure your hardware is optimized (enough RAM, fast enough disks, etc)
Consider caching results in separate tables (one option is sketched after this list)
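One way to do that caching and let SQL Server maintain it for you is an indexed view; a sketch, borrowing the email_filter_matches table from the previous answer:
CREATE VIEW dbo.v_filter_counts
WITH SCHEMABINDING
AS
SELECT filter, COUNT_BIG(*) AS match_count
FROM dbo.email_filter_matches
GROUP BY filter;
GO
-- the unique clustered index materializes the counts; reads become simple seeks
CREATE UNIQUE CLUSTERED INDEX ix_v_filter_counts ON dbo.v_filter_counts (filter);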
As an alternative, if only the filters change frequently and not the data itself, you might consider building a cube using Analysis Services, and run your queries against that.