Optimal fill factor to prevent fragmentation - sql

Currently I use a daily job to REORGANIZE 1000+ indexes with > 5% and <= 30% fragmentation and REBUILD indexes with > 30% fragmentation:
https://msdn.microsoft.com/en-us/library/ms189858.aspx
All indexes are rebuilt with a fill factor of 80%, but based on my latest check, the fragmentation levels of 100+ indexes are unchanged, most of them still highly fragmented. I tried to play with the fill factor values in a test environment, but unfortunately I can't simulate the production environment.
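For reference, a minimal sketch of that kind of threshold-based maintenance (the thresholds follow the 5%/30% rules above; object names and the page-count filter are hypothetical, and a real job would loop over the results and build dynamic SQL):

-- Classify indexes by fragmentation level
SELECT  OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
        OBJECT_NAME(ps.object_id)        AS table_name,
        i.name                           AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count,
        CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
             WHEN ps.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
             ELSE 'NONE' END             AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   ps.index_id > 0        -- skip heaps
  AND   ps.page_count > 100;   -- ignore tiny indexes

-- The fix is then something along the lines of:
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (FILLFACTOR = 80);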
I'm wondering whether finding the 'best' fill factor for each individual index is a good idea.

[Is] finding the 'best' fill-factor for each individual index a good idea?
If the options are:
Keep the current global 80% FILLFACTOR
or
Find the best FILLFACTOR for each table
then absolutely YES, find the most appropriate value for each table. Of course, had there been an option for:
Put everything back to the default FILLFACTOR of 0 (same thing as 100) and apply a lower value--determined per table--to only those tables that should benefit from it
then I would have chosen #3 :-). Why? Because fragmentation and fill factor can both be a bit complicated and tricky. And setting a globally low value (80 is "low" given that the default is 100) probably has a negative impact on a larger group of tables than the group that actually benefits from it.
Consider:
Fragmentation is one of several factors that influence performance: This particular factor is a trade-off with the size of the table, since it affects how many rows fit on a data page. Fewer rows per data page means more pages need to be read from disk (not quick) to satisfy queries, and those pages take up more memory (i.e. in the Buffer Pool). In fact, there are a great many negative effects of tables being larger than they should be, such as index maintenance / backup / restore / update stats / etc. operations taking longer than they should.
Setting the fill factor too low on large tables means that the tables will be even larger. The increase in disk reads and in space required in the Buffer Pool needs to be balanced against the types of operations run against the table. Singleton operations aren't affected much by fragmentation, so if they are by far the majority use case, then you can err on the side of reducing the number of data pages the table requires. If you have a lot of range operations, then you might need to err on the side of less fragmentation (see the rough arithmetic after these considerations).
Data access patterns: Is the table being mostly appended to? If INSERTs are happening at the end of the table only, then fragmentation can only really occur if updates are occurring that either increase the size of rows with variable-length datatypes, or if the row moves position due to a change in value of 1 or more key fields.
Also, deleting large amounts of rows can cause fragmentation. This happens when no rows are left on the data page. This is a situation where fragmentation not only cannot be mitigated by lowering the FILLFACTOR (even if all other conditions are favorable for lowering it), but would seem to actually be made worse by lowering it. If deletes occur frequently enough to leave empty data pages, then reducing the number of rows on those pages would increase the rate at which they become empty (i.e. between 3 data pages mostly filled with 500 rows each, and 5 data pages--with a lower FILLFACTOR--filled with only 300 rows each, deleting 700 rows will leave 1 empty data page in the first scenario but 2 empty data pages in the second scenario). And more empty data pages means more "unused" space.
Row size: A table with a row size of 100 bytes will have little "wasted" space due to trying to maintain a particular fillfactor. Meaning, if wanting to fill a page 80%, then a small row size will probably lead to actually filling the page 78% (as an example). But a row size of 3500 bytes will lead to only 1 row per page, which is really just under 50% used. And in the end, how many rows do you think need to be "reserved" for out of sequence inserts or rows that expand in size? A row size of 3500 bytes would only fit 1 more row on the page anyway so not much was really saved. A row size of 100 bytes on the other hand would reserve space for quite a few rows, and this is good, but only if it will be used.
Data distribution across the entire table: Meaning, let's say you have a table with 100 million rows. And let's also say that this table does allow for non-sequential inserts and/or updates that expand the size of the row. If the locations of the inserts or updates that could cause fragmentation are evenly distributed (or at least cover 50% of the table), then a lower FILLFACTOR could be useful. But, if the inserts and/or updates are confined to the most recent 5 million rows, then why reserve free space across the first 95 million rows when it will never be used? For example, if you have a table that is ordered on a DATETIME field, holds data for several years, and changes only occur in the most recent 2 months, then you might as well use 100%.
FILLFACTOR only applies when creating or rebuilding indexes: Newly created data pages (including those created by page splits) do not honor the FILLFACTOR; pages allocated for sequential inserts fill to 100% (or as close as they can get). Meaning, if you insert a lot of data such that several (or many) new data pages are created, and the inserts are done sequentially so there is no fragmentation at the end of the inserts, but then the rows are updated in a way that causes fragmentation, or new inserts land among the rows inserted a moment ago, then there is no way to prevent that fragmentation anyway (at least not without doing a REBUILD after every group of inserts, and that is just silly).
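To make the size trade-off above concrete, here is a rough back-of-envelope calculation (assuming a hypothetical 200-byte row and roughly 8,060 usable bytes per SQL Server data page; real numbers depend on row overhead):

-- Approximate rows per page at 100% vs 80% fill, and the resulting size increase
SELECT  FLOOR(8060 / 200)        AS rows_per_page_at_100,   -- 40
        FLOOR(8060 * 0.80 / 200) AS rows_per_page_at_80,    -- 32
        40.0 / 32.0              AS relative_table_size;    -- the table is ~25% larger at 80%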
Hence, the situations that truly benefit from a lower (than the default 100%--expressed as 0) FILLFACTOR are far fewer than those that benefit from the default. So set them all back to 100 (or 0) and look for tables that fit the following profile:
Not small. This is very subjective, but I would think anything under 10,000 rows can be ignored (i.e. get the default)
Row size is under 1000 bytes (maybe the cutoff should be even lower?). If you are only reserving space for 1 or 2 rows, then you are doing more harm than good.
Data access patterns that can cause fragmentation: non-sequential inserts, and updates that expand the size of the row or cause its location to move.
Be careful to consider how much of the fragmentation is being caused by deletes that leave empty data pages. This type of fragmentation is adversely affected by lowering the FILLFACTOR, so deletes should make up, at most, a small proportion of the fragmentation.
Data distribution that results in the fragmentation getting distributed somewhat evenly across the index instead of being confined to 40% or less of it
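A sketch of what acting on that profile could look like (index and table names are hypothetical, and the fill factor values are illustrative only):

-- See what is currently in place
SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name, fill_factor
FROM   sys.indexes
WHERE  fill_factor NOT IN (0, 100);

-- Put an index back to the default (0 = 100)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (FILLFACTOR = 100);

-- Apply a lower value only to an index that fits the profile above
ALTER INDEX IX_Customers_LastName ON dbo.Customers REBUILD WITH (FILLFACTOR = 90);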
Keep in mind:
Like many (or most?) other optimizations, the effects are proportional to the scale of the system. Small systems won't see much of an effect, but the larger the tables get, the more noticeable proper vs improper settings become.
It is certainly possible that a system naturally behaves in such a way that the "optimal" FILL FACTOR for all tables somehow does end up being the same--whether 80% or some other value. I am not sure how probable it is that such a system exists, but it is certainly within the realm of possibilities.

Related

Does it make a difference in SQL Server whether to use a TinyInt or Bit? Both in size and query performance

I have a table that has 124,387,133 rows. Each row has 59 columns, and of those 59, 18 are of the TinyInt data type and all of their values are either 0 or 1. Some of the TinyInt columns are used in indexes and some are not.
My question: will it make a difference in query performance and table size if I change the TinyInt columns to Bit?
In case you don't know, a Bit uses less space than a TinyInt to store the same information (1 bit versus 8 bits). So you would save space by changing to Bit, and in theory performance should be better. It is generally hard to notice such an improvement, but with the amount of data you have it might actually make a difference; I would test it on a backup copy.
Actually, it's good to use the right data type. Below are the benefits I can see from using the Bit data type:
1. Buffer pool savings: pages are read into memory from storage, and smaller rows mean fewer pages, so less memory needs to be allocated for the same data.
2. Smaller index key size, so more rows fit on one page and there is less traversing.
You would also see the storage space savings as an immediate benefit.
In theory yes; in practice the difference is going to be subtle. The 18 Bit fields get byte-packed and rounded up, so they shrink to 3 bytes. Depending on nullability (and any change in nullability) the storage cost will change again. Both types are held within the fixed-width portion of the row. So you will drop from 18 bytes to 3 bytes for those fields, and depending on the overall size of the row versus the page size you may squeeze an extra row onto each page. (The rows-per-page density is where the performance gain will primarily show up, if you gain at all.)
This seems like a premature micro-optimization, however. If you are suffering from bad performance, investigate that and gather the evidence supporting any changes. Making type changes on existing systems should be carefully considered: if you cause the need for a code change, which prompts a full regression test and so on, the cost of the change rises dramatically, for very little end result. (The production change on a large dataset will also not be quick, so you can factor some downtime into the cost as well.)
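If you do test it on a restored copy, one way to see the effect on row density is to compare the average record size and page count before and after the type change (a sketch with a hypothetical table name; SAMPLED or DETAILED mode is needed to populate the record size):

SELECT  i.name AS index_name,
        ps.avg_record_size_in_bytes,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigFlagsTable'), NULL, NULL, 'SAMPLED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   ps.alloc_unit_type_desc = 'IN_ROW_DATA';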
You would be saving about 15 bytes per record, for a total of 1.8 Gbytes.
You have 41 remaining fields. If I assume that those are 4-byte integers, then your current overall size is about 22 Gbytes. The overall savings is less than 10% -- and could be much less if the other fields are larger.
This does mean that a full table scan would be about 10% faster, so that gives you a sense of the performance gain and magnitude.
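A quick back-of-envelope version of the arithmetic above (rough: it ignores per-row overhead such as the record header and null bitmap):

SELECT  CAST(124387133 AS bigint) * 15 / 1000000000.0            AS approx_gb_saved,    -- ~1.9
        CAST(124387133 AS bigint) * (41 * 4 + 18) / 1000000000.0 AS approx_current_gb;  -- ~22.6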
I believe that bit fields require an extra operation or two to mask the bits and read -- trivial overhead that is measured in nanoseconds these days -- but something to keep in mind.
The benefit of smaller records is that more of them fit on a single page, so the table occupies less space in memory (assuming it is all read in at once) and less space on disk. Smaller data does not always mean improved query performance, though. Here are two caveats:
If you are reading a single record, then the entire page needs to be read into cache. It is true that you are a bit less likely to get a cache miss with a warm cache, but overall reading a single record from a cold cache would be the same.
If you are reading the entire table, SQL Server actually reads pages in blocks and implements some look-ahead (also called read-ahead or prefetching). If you are doing complex processing, you might not even notice the additional I/O time, because I/O operations can run in parallel with computing.
For other operations such as deletes and updates, locking is sometimes done at the page level. In these cases, sparser pages can be associated with better performance.

Why is the transaction log growing so large?

I'm performing an update on a DB that is writing a 15-digit number into a single column across 270,000,000 rows. I think the space required should be around 4 GB, but the update is still running and the transaction log has just hit 180 GB.
Transactions have to store a lot of information just in case the changes need to be rolled back.
There needs to be a sequential value to know which order the records were updated/inserted. It needs to store the original value for the column (some RDBMSs might even store the whole row!). It needs a unique identifier to tie the data back to the row's location.
It has to store so much data because if something catastrophic happens -- like the database crashing -- it needs to be able to return to a consistent state.
Yes, 15 digits * 270 mil may come out to 4 GB, but that completely ignores all of the very important metadata required.
If this is a one-off update that doesn't need to be repeated, it may be faster to simply recreate the table with the column updated. Compared to inserts/updates/deletes, table creates from selects require almost no transaction logging.
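A sketch of that approach (all names are hypothetical; SELECT INTO is minimally logged only under the SIMPLE or BULK_LOGGED recovery model, and you still need to recreate indexes, constraints, and permissions on the new table before swapping):

-- Build a new copy of the table with the column already populated
SELECT  SomeKeyColumn,
        SomeOtherColumn,                                      -- ...and the rest of the columns
        CAST(123456789012345 AS bigint) AS TheUpdatedColumn   -- or however the new 15-digit value is derived
INTO    dbo.BigTable_New
FROM    dbo.BigTable;

-- Then swap the tables (after recreating indexes/constraints on the new one)
-- EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
-- EXEC sp_rename 'dbo.BigTable_New', 'BigTable';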
Probably all pages are splitting due to the significant amount of data added (4/180 = 2.2%: that might not seem significant, but it is probably enough to push many pages over the edge).
Rebuild the clustered index with a fillfactor (probably 90 is enough). Then, you will not have any page splits when updating.
If this does not help, we need to dig deeper.
In any case there will be significant log growth, and it will certainly be more than 4 GB. But 180 GB sounds like too much; it sounds like whole pages are being logged.
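If you do need to dig deeper, one place to look is the allocation counters in sys.dm_db_index_operational_stats, which include leaf pages added by page splits (a sketch with a hypothetical table name; the counters are not persisted across restarts, so compare values before and after the update):

SELECT  OBJECT_NAME(ios.object_id) AS table_name,
        i.name                     AS index_name,
        ios.leaf_allocation_count,   -- new leaf pages, including those from page splits
        ios.leaf_update_count
FROM    sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL) AS ios
JOIN    sys.indexes AS i
        ON i.object_id = ios.object_id AND i.index_id = ios.index_id;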

Postgres performance improvement and checklist

I'm studying a series of performance issues in my application, written in Java, which gets about 100,000 hits per day; each visit performs on average 5 to 10 reads/writes on the 2 principal database tables (split equally), both of which have between 1 and 3 million records (I access the DB via Hibernate).
My two main tables store user information (about 60 columns of type varchar, integer and timestamptz) and the data to be displayed, linked to the user (about 30 columns, again mainly varchar, integer and timestamptz).
The main problem I've encountered, which may have caused a drop in performance on my site (we're talking load times over 5 seconds, which obviously does not depend only on database performance), is the fill factor, which is currently at the default value of 100 (a value meant for data that never changes).
Obviously the fill factor is the same on the indexes (there are 10 btree indexes on each of the 2 tables).
Currently, the workload on my main tables is:
40% select operations
30% update operations
20% insert operations
10% delete operations.
My database is also made up of 40 other tables of minor importance (only 3 others have the same cardinality as the user table).
My questions are:
How do you find the right fill factor value to set?
What would be a checklist of things to review to improve the performance of a database of this kind?
The database is on a dedicated server (16 GB RAM, 8 cores) and storage is on SSD (data is backed up daily and moved to separate storage).
You have likely hit the "knee" of your memory usage where the entire index of the heavily used tables no longer fits in shared memory, so disk I/O is slowing it down. Confirm by checking if disk I/O is higher than normal. If so, try increasing shared memory (shared_buffers), or if that's already maxed, adjust the system shared memory size or add more system memory so you can bump it higher. You'll also probably have to start adjusting temp buffers, work memory and maintenance memory, and WAL parameters like checkpoint_segments, etc.
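A sketch of what adjusting those settings could look like on a reasonably recent PostgreSQL (ALTER SYSTEM requires 9.4+; the values are placeholders for a 16 GB box, not recommendations):

-- Written to postgresql.auto.conf; adjust to your workload
ALTER SYSTEM SET shared_buffers = '4GB';          -- requires a server restart
ALTER SYSTEM SET work_mem = '32MB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
ALTER SYSTEM SET effective_cache_size = '12GB';
-- WAL sizing: checkpoint_segments on old versions, max_wal_size on 9.5+
SELECT pg_reload_conf();                          -- picks up everything except shared_buffers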
There are some perf tuning hints on PostgreSQL.org, and Google is your friend.
Edit (to address the first comment): The first symptom of not-enough-memory is a big drop in performance, everything else being the same. Changing the table fill factor is not going to make a difference if you have hit a knee in memory usage; if anything it will make things worse with respect to load times (which I assume means "db reads"), because row data will be spread across more pages on disk, with blank space in each page, so more disk I/O is needed for table scans. A fill factor of less than 100% can help with UPDATE operations, but I've found that adjusting WAL parameters can compensate most of the time when using indexes (unless you've already optimized those). Bottom line: you need to profile all the heavy queries using EXPLAIN to see what will help. But at first glance, I'm pretty certain this is a memory issue, even with the database on an SSD. We're talking about a lot of random reads and random writes, and many SSDs actually perform worse than HDDs after a lot of small random writes.
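And if a lower fill factor does turn out to help the UPDATE-heavy table, a sketch of how that might be checked and applied in PostgreSQL (table and index names are hypothetical; the new setting only affects pages written after the change unless the table is rewritten, e.g. by VACUUM FULL):

-- How many updates are already HOT (heap-only tuples)?
-- If few updates are HOT, freeing space per page may help
-- (it will not help if the updated columns are indexed).
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM   pg_stat_user_tables
WHERE  relname = 'user_table';

-- Leave some free space per page for HOT updates (table and indexes are set separately)
ALTER TABLE user_table SET (fillfactor = 90);
ALTER INDEX user_table_pkey SET (fillfactor = 90);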

Table with no more than 30k records needs index rebuilding after a handful of inserts

I have a table with 20 or so columns. I have approximately 7 non-clustered indexes in that table on the columns that users filter by more often. The active records (those that the users see on their screen) are no more than 700-800. Twice a day a batch job runs and inserts a few records in that table - maybe 30 - 100 - and may update the existing ones as well.
I have noticed that the indexes need rebuilding EVERY time the batch operation completes. Their fragmentation level doesn't go from 0-1% step by step up to, say, 50%; it jumps from 0-1% to approximately 99% after the batch operation completes. A zillion selects can happen on this table between batch operations, but I don't think that matters.
Is this normal? I don't think it is. What do you think the problem is here? The indexed columns are mostly strings and floats.
A few changes could easily change fragmentation levels.
An insert on a page can cause a page split
Rows can overflow
Rows can be moved (forward pointers)
You'll have quite wide rows too, so your data density (rows per page) is lower. DML on existing rows will cause fragmentation quite quickly if it is distributed across many pages.
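A small diagnostic to run right after the batch completes to see page count and fragmentation per index (table name is hypothetical; with this few rows the page_count is worth reading alongside the percentage):

SELECT  i.name AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count,
        ps.avg_page_space_used_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.ActiveRecords'), NULL, NULL, 'SAMPLED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;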

Performance of returning entire tables containing blog text as opposed to selecting specific columns

I think this is a pretty common scenario: I have a webpage that's returning links and excerpts to the 10 most recent blog entries.
If I just queried the entire table, I could use my ORM mapped object, but I'd be downloading all the blog text.
If I restricted the query to just the columns that I need, I'd be defining another class that'll hold just those required fields.
How bad is the performance hit if I were to query entire rows? Is it worth selecting just what I need?
The answer is "it depends".
There are two things that affect performance as far as column selection:
Are there covering indexes? E.g. if there is an index containing ALL of the columns in the smaller query, then the smaller column set would be extremely beneficial performance-wise, since only the index would be read, without touching the rows themselves.
Size of columns. Basically, compare the size of the entire row vs. the size of only the columns in the smaller query.
If the ratio is significant (e.g. full row is 3x bigger), then you might have significant savings in both IO (for retrieval) and network (for transmission) cost.
If the ratio is more like 10% benefit, it might not be worth it as far as DB performance gain.
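As an illustration of the covering-index point (SQL Server syntax, with hypothetical table and column names): if the homepage query only needs the link and excerpt columns, an index like this lets it avoid touching the wide blog-text column entirely.

-- Covers the narrow query: the engine reads only the index, never the BlogText column
CREATE NONCLUSTERED INDEX IX_BlogPosts_PublishedAt
    ON dbo.BlogPosts (PublishedAt DESC)
    INCLUDE (Title, Slug, Excerpt);

SELECT TOP (10) Title, Slug, Excerpt, PublishedAt
FROM   dbo.BlogPosts
ORDER  BY PublishedAt DESC;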
It depends, but it will never be as efficient as returning only the columns you need (obviously). If there are few rows and the row sizes are small, then network bandwidth won't be affected too badly.
But, returning only the columns you need increases the chance that there is a covering index that can be used to satisfy the query, and that can make a big difference in the time a query takes to execute.
Since you specify that it's for 10 records, the answer changes from "It Depends" to "Don't spend even a second worrying about this".
Unless your server is in another country on a dialup connection, wire time for 10 records will be zero, regardless of how many bytes you shave off each row. It's simply not something worth optimizing for.
So for this case, you get to set your ORM free to grab you those records in the least efficient manner it can come up with. If your situation changes, and you suddenly need more than, say, 1000 records at once, then you can come back and we'll make fun of you for not specifying columns, but for now you get a free pass.
For extra credit, once you start issuing this homepage query more than 10x per second, you can add caching on the server to avoid repeatedly hitting the database. That'll get you a lot more bang for your buck than optimizing the query.