I'm studying a series of performance issues in my application, written in Java, which gets about 100,000 hits per day; each visit performs on average 5 to 10 reads/writes on the 2 principal database tables (split roughly evenly), each of which holds between 1 and 3 million records (I access the DB via Hibernate).
The first of my two main tables stores user information (about 60 columns of type varchar, integer and timestamptz); the second, linked to it, holds the data to be displayed (about 30 columns, again mainly varchar, integer and timestamptz).
The main suspect for the drop in performance of my site (we're talking about load times over 5 seconds, which obviously don't depend only on database performance) is the fill factor, which is currently at the default value of 100 (the value normally used when data doesn't change).
Obviously the fill factor is the same on the indexes (there are 10 btree indexes on each of the 2 tables).
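For reference, this is roughly how a non-default fill factor would be applied in PostgreSQL (placeholder names, not my real tables/indexes):

-- Placeholder names; a new fillfactor only affects pages written afterwards,
-- so existing data has to be rewritten (e.g. VACUUM FULL / REINDEX) to repack it.
ALTER TABLE user_table SET (fillfactor = 90);
ALTER INDEX user_table_email_idx SET (fillfactor = 90);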
Currently the workload on my main tables is roughly:
40% select operations
30% update operations
20% insert operations
10% delete operations.
My database also contains about 40 other tables of minor importance (only 3 of them have a cardinality similar to the user table).
My questions are:
How do you find the right fill factor value to set?
What would be a checklist of things to review to improve the performance of a database like this?
The database is on a dedicated server (16 GB RAM, 8 cores) and storage is on SSD (data is backed up every day and moved to another storage).
You have likely hit the "knee" of your memory usage where the entire index of the heavily used tables no longer fits in shared memory, so disk I/O is slowing it down. Confirm by checking if disk I/O is higher than normal. If so, try increasing shared memory (shared_buffers), or if that's already maxed, adjust the system shared memory size or add more system memory so you can bump it higher. You'll also probably have to start adjusting temp buffers, work memory and maintenance memory, and WAL parameters like checkpoint_segments, etc.
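For illustration only (ALTER SYSTEM needs PostgreSQL 9.4+, and the numbers here are placeholders to tune for your workload, not recommendations), checking and bumping those settings looks roughly like this:

SHOW shared_buffers;                             -- check the current value first
ALTER SYSTEM SET shared_buffers = '4GB';         -- often started around 25% of RAM; needs a restart
ALTER SYSTEM SET work_mem = '32MB';              -- per sort/hash operation, so keep it modest
ALTER SYSTEM SET maintenance_work_mem = '512MB';
SELECT pg_reload_conf();                         -- reloads config; shared_buffers still requires a restart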
There are some perf tuning hints on PostgreSQL.org, and Google is your friend.
Edit (to address the first comment): The first symptom of not-enough-memory is a big drop in performance, everything else being the same. Changing the table fill factor is not going to make a difference if you've hit a knee in memory usage; if anything it will make load times (which I assume means "db reads") worse, because row data will be spread across more pages on disk with blank space in each page, so more disk I/O is needed for table scans. A fill factor below 100% can help with UPDATE operations, but I've found that adjusting WAL parameters usually compensates when indexes are in use (unless you've already optimized those). Bottom line: you need to profile all the heavy queries using EXPLAIN to see what will help. At first glance, though, I'm pretty certain this is a memory issue even with the database on an SSD. We're talking about a lot of random reads and random writes, and many SSDs actually get worse than HDDs after a lot of small random writes.
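For example (hypothetical table and column names), profiling one heavy query would look something like this; the BUFFERS option also shows how much of the work was served from shared buffers versus disk:

-- Hypothetical query; substitute one of your real heavy queries.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM user_table
WHERE last_login > now() - interval '7 days';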
Related
I have a table with 124,387,133 rows; each row has 59 columns, and 18 of those 59 columns are of the TinyInt data type, with every value either 0 or 1. Some of the TinyInt columns are used in indexes and some are not.
My question: will it make a difference to query performance and table size if I change the TinyInt columns to bit?
In case you don't know, a bit uses less space than a TinyInt (1 bit versus 8 bits). So you would save space by changing to bit, and in theory performance should be better. Generally such a performance improvement is hard to notice, but with the amount of data you have it might actually make a difference; I would test it on a backup copy.
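A minimal sketch of that test on the restored copy, with dbo.BigTable / IsActive as placeholder names:

-- On the backup/restored copy only; names are placeholders.
EXEC sp_spaceused 'dbo.BigTable';                     -- size before

-- Keep the column's existing NULL/NOT NULL setting; columns that are part of an index
-- need that index dropped and recreated around the change.
ALTER TABLE dbo.BigTable ALTER COLUMN IsActive bit NOT NULL;
-- repeat for the other TinyInt flag columns

ALTER INDEX ALL ON dbo.BigTable REBUILD;              -- rebuild to reclaim space in affected indexes
EXEC sp_spaceused 'dbo.BigTable';                     -- size after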
Actually, it's good to use the right data type. Below are the benefits I can see when you use the bit data type:
1. Buffer pool savings: a page is read into memory from storage, and less memory has to be allocated.
2. The index key size will be smaller, so more rows fit into one page and there is less traversing.
You also get storage space savings as an immediate benefit.
In theory yes; in practice the difference is going to be subtle. The 18 bit fields get byte-packed and rounded up, so they change to 3 bytes. Depending on nullability / any nullability change, the storage cost will change again. Both types are held within the fixed-width portion of the row. So you will drop from 18 bytes to 3 bytes for those fields; depending on the overall size of the row versus the page size, you may squeeze an extra row onto each page. (The rows-per-page density is where the performance gain will primarily show up, if you are to gain at all.)
This seems like a premature micro-optimization, however; if you are suffering from bad performance, investigate that and gather evidence to support any changes. Type changes on existing systems should be considered carefully: if they force a code change, which prompts a full regression test etc., the cost of the change rises dramatically, for very little end result. (The production change on a large dataset will also not be quick, so factor some downtime into the cost of making this change as well.)
You would be saving about 15 bytes per record, for a total of 1.8 Gbytes.
You have 41 remaining fields. If I assume that those are 4-byte integers, then your current overall size is about 22 Gbytes. The overall savings is less than 10% -- and could be much less if the other fields are larger.
This does mean that a full table scan would be about 10% faster, so that gives you a sense of the performance gain and magnitude.
I believe that bit fields require an extra operation or two to mask the bits and read -- trivial overhead that is measured in nanoseconds these days -- but something to keep in mind.
The benefit of a smaller record size is that more records fit on a single page, so the table occupies less space in memory (assuming it is all read in at once) and less space on disk. Smaller data does not always mean improved query performance, though. Here are two caveats:
If you are reading a single record, then the entire page needs to be read into cache. It is true that you are a bit less likely to get a cache miss with a warm cache, but overall reading a single record from a cold cache would be the same.
If you are reading the entire table, SQL Server actually reads pages in blocks and implements some look-ahead (also called read-ahead or prefetching). If you are doing complex processing, you might not even notice the additional I/O time, because I/O operations can run in parallel with computing.
For other operations such as deletes and updates, locking is sometimes done at the page level. In these cases, sparser pages can be associated with better performance.
I have an application using an Oracle database on an 8-core machine with 16 GB RAM. The table has 15 columns and around 5,700,000 rows. There are indexes on 5 columns which are frequently updated. When we put on a load of 100 requests/second, where each request loop does an insert plus some read and update operations, the CPU load starts increasing exponentially, reaches up to 25, and after that I start getting the error:
I/O Error : Socket read time out.
However, when we perform the same operations with an index on a single column, the load stays at a consistent 4-5. With indexes on only 5 columns and a machine with an 8-core CPU and 16 GB RAM, the load shouldn't differ that much.
It would be helpful if you could show DDL for the creation of the table and the creation of your five indexes. It would also be helpful to understand what each indexed column represents, what the distribution of that data looks like, how unique are the values, how frequently a column is updated vs being inserted, etc. The accuracy of answers depends greatly on the clarity of your question. There are knobs you can turn on index creation that might help your performance issues, but there is not enough information here to offer any assistance.
We have a very big database, WriteDB, which stores raw trading data, and we use this table for fast writes. Then, with SQL scripts, I import data from WriteDB into ReadDB into essentially the same table, but extended with some extra values and a relation added. The import script looks like this:
TRUNCATE TABLE [ReadDB].[dbo].[Price]
GO
INSERT INTO [ReadDB].[dbo].[Price]
SELECT a.*, 0 as ValueUSD, 0 as ValueEUR
from [WriteDB].[dbo].[Price] a
JOIN [ReadDB].[dbo].[Companies] b ON a.QuoteId = b.QuoteID
Initially there are around 130 million rows in this table (~50 GB). Each day some rows are added and some change, so for now we decided not to over-complicate the logic and just re-import all the data. The problem is that, for some reason, over time this script takes longer and longer on almost the same amount of data. The first run took ~1h; now it already takes 3h.
SQL Server also doesn't behave well after the import. After the import (or during it), if I try to run various queries, even the simplest ones often fail with timeout errors.
What is the reason for this bad behavior, and how can I fix it?
One theory is that your first 50GB dataset has filled available memory for caching. Upon truncating the table, your cache is now effectively empty. This alternating behavior makes effective use of the cache difficult and incurs a substantial number of cache misses / increased IO time.
Consider the following sequence of events:
You load your initial dataset into WriteDb. During the load operation, pages in WriteDb are cached. There's very little memory contention because there's only one copy of the dataset and sufficient memory.
You initially populate ReadDb. The pages required to populate ReadDb (the data in WriteDb) are already largely cached. Fewer reads are required from disk, and your IO time can be dedicated to writing the inserted data for ReadDb. (This is your fast first run.)
You load your second dataset into WriteDb. During the load operation, there is insufficient memory to cache both existing data in ReadDb and new data written to WriteDb. This memory contention leads to fewer pages of WriteDb cached.
You truncate ReadDb. This invalidates a substantial portion of your cache (i.e. the 50GB of ReadDb data that was cached).
You then attempt your second load of ReadDb. Here you have very little of WriteDb cached, so your IO time is split between reading pages of WriteDb (your query) and writing pages of ReadDb (your insert). (This is your slow second run.)
You could test this theory by comparing the SQL Server cache miss ratio during your first and second load operations.
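SQL Server exposes this as a buffer cache hit ratio counter rather than a miss ratio directly; a standard query (sketched here) to sample during each load and compare:

SELECT CAST(a.cntr_value AS float) / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.[object_name] LIKE '%Buffer Manager%';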
Some ways to improve performance might be to:
Use separate disk arrays for ReadDb / WriteDb to increase parallel IO performance.
Increase the available cache (amount of server memory) to accommodate the combined size of ReadDb + WriteDb and minimize cache misses.
Minimize the impact of each load operation on existing cached pages by using a MERGE statement instead of dumping / loading 50GB of data at a time.
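A rough sketch of that last point, assuming a hypothetical unique key PriceId on the Price table and treating the other column names as placeholders (not a drop-in replacement for your script):

MERGE [ReadDB].[dbo].[Price] AS tgt
USING (
    SELECT a.*, 0 AS ValueUSD, 0 AS ValueEUR
    FROM [WriteDB].[dbo].[Price] a
    JOIN [ReadDB].[dbo].[Companies] b ON a.QuoteId = b.QuoteID
) AS src
    ON tgt.PriceId = src.PriceId                 -- hypothetical unique key of a price row
WHEN MATCHED THEN
    UPDATE SET tgt.Price = src.Price             -- placeholder for the columns that can change
WHEN NOT MATCHED BY TARGET THEN
    INSERT (PriceId, QuoteId, Price, ValueUSD, ValueEUR)                     -- placeholder column list
    VALUES (src.PriceId, src.QuoteId, src.Price, src.ValueUSD, src.ValueEUR)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;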
Currently I use a daily job to REORGANIZE 1000+ indexes with > 5% and <= 30% fragmentation and REBUILD indexes with > 30% fragmentation:
https://msdn.microsoft.com/en-us/library/ms189858.aspx
All indexes are rebuilt with a fill factor of 80%, but based on my latest check, the fragmentation levels of 100+ indexes are unchanged, most of them with high fragmentation. I tried to play with fill factor values in a test environment, but unfortunately I can't simulate the production environment.
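For context, the fragmentation levels come from the standard sys.dm_db_index_physical_stats check (roughly like this, scoped to the current database; sketched from memory):

SELECT OBJECT_NAME(ips.[object_id])  AS table_name,
       i.name                        AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON  ips.[object_id] = i.[object_id]
    AND ips.index_id    = i.index_id
WHERE ips.avg_fragmentation_in_percent > 5.0;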
I'm wondering if finding the 'best' fill-factor for each individual index is a good idea?
[Is] finding the 'best' fill-factor for each individual index a good idea?
If the options are:
Keep the current global 80% FILLFACTOR
or
Find the best FILLFACTOR for each table
then absolutely YES, find the most appropriate value for each table. Of course, had there been an option for:
Put everything back to the default FILLFACTOR of 0 (same thing as 100) and apply a lower value--determined per table--to only those tables that should benefit from it
then I would have chosen #3 :-). Why? Because fragmentation and fillfactor can both be a bit complicated and tricky. And setting a globally low (80 is "low" given that the default is 100) value probably has a negative impact on a larger group of tables than the benefit you might be getting on the tables where it makes sense to have it.
Consider:
Fragmentation is one of several factors that influence performance: this particular factor is a trade-off with the size of the table, since it affects how many rows fit on a data page. Fewer rows on a data page means more pages need to be read from disk (not quick) to satisfy queries, and those pages take up more memory (i.e. the Buffer Pool). In fact, there are a great many negative effects resulting from tables being larger than they should be, such as index maintenance / backup / restore / update stats / etc. operations taking longer than they should.
Setting the fill factor too low on large tables means that the tables will be even larger. The increase in disk reads and size required in the Buffer Pool needs to be balanced with the types of operations against the table. Singleton operations aren't affected so much by fragmentation, so if that is by far the majority use case, then you can err on the side of reducing the number of data pages required by the table. If you have a lot of range operations then you might need to err on the side of less fragmentation.
Data access patterns: Is the table being mostly appended to? If INSERTs are happening at the end of the table only, then fragmentation can only really occur if updates are occurring that either increase the size of rows with variable-length datatypes, or if the row moves position due to a change in value of 1 or more key fields.
Also, deleting large amounts of rows can cause fragmentation. This happens when no rows are left on the data page. This is a situation where fragmentation not only cannot be mitigated by lowering the FILLFACTOR (even if all other conditions are favorable for lowering it), but would seem to actually be made worse by lowering it. If deletes occur frequently enough to leave empty data pages, then reducing the number of rows on those pages would increase the rate at which they become empty (i.e. between 3 data pages mostly filled with 500 rows each, and 5 data pages--with a lower FILLFACTOR--filled with only 300 rows each, deleting 700 rows will leave 1 empty data page in the first scenario but 2 empty data pages in the second scenario). And more empty data pages means more "unused" space.
Row size: A table with a row size of 100 bytes will have little "wasted" space due to trying to maintain a particular fillfactor. Meaning, if wanting to fill a page 80%, then a small row size will probably lead to actually filling the page 78% (as an example). But a row size of 3500 bytes will lead to only 1 row per page, which is really just under 50% used. And in the end, how many rows do you think need to be "reserved" for out of sequence inserts or rows that expand in size? A row size of 3500 bytes would only fit 1 more row on the page anyway so not much was really saved. A row size of 100 bytes on the other hand would reserve space for quite a few rows, and this is good, but only if it will be used.
Data distribution across the entire table: Meaning, let's say you have a table with 100 million rows. And let's also say that this table does allow for non-sequential inserts and/or updates that expand the size of the row. If the locations of the inserts or updates that could cause fragmentation are evenly distributed (or at least cover 50% of the table), then a lower FILLFACTOR could be useful. But, if the inserts and/or updates are confined to the most recent 5 million rows, then why reserve free space across the first 95 million rows when it will never be used? For example, if you have a table that is ordered on a DATETIME field, holds data for several years, and changes only occur in the most recent 2 months, then you might as well use 100%.
FILLFACTOR only applies when creating or rebuilding indexes: Newly created data pages (including those created from page splits) will fill to 100% (or as close to it as they can get). Meaning, if you insert a lot of data such that several (or many) new data pages are created, and the inserts are done sequentially so that there is no fragmentation at the end of the inserts, but then the rows are updated in such a way as to cause fragmentation, or new inserts happen that are spread among the rows inserted a moment ago, then there is no way to prevent that fragmentation anyway (at least not without doing a REBUILD after every group of inserts, and that is just silly).
Hence, the situations that truly benefit from a lower (than the default 100%--expressed as 0) FILLFACTOR are far fewer than those that benefit from the default. So set them all back to 100 (or 0) and look for tables that fit the following profile:
Not small. This is very subjective, but I would think anything under 10,000 rows can be ignored (i.e. get the default)
Row size is under 1000 bytes (maybe even less than 1000?). If you are only reserving space for 1 or 2 rows then you are doing more harm than good.
Data access patterns that can cause fragmentation: non-sequential inserts, and updates that expand the size of the row or cause its location to move.
Be careful to consider how much of the fragmentation is being caused by deletes that leave empty data pages. This type of fragmentation is adversely affected by lowering the FILLFACTOR, so deletes should make up, at most, a small proportion of the fragmentation.
Data distribution that results in the fragmentation getting distributed somewhat evenly across the index instead of being confined to 40% or less of it
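Coming back to "set them all back to 100 (or 0)": the syntax involved is just an index rebuild with an explicit FILLFACTOR, sketched here with placeholder table/index names:

-- Reset an index to the default fill factor:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 100);

-- Apply a lower value only to an index that fits the profile above:
ALTER INDEX IX_Orders_Status ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);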
Keep in mind:
Like many (or most?) other optimizations, the effects are proportional to the scale of the system. Small systems won't see much of an effect, but the larger the tables get, the more noticeable proper vs improper settings become.
It is certainly possible that a system naturally behaves in such a way that the "optimal" FILL FACTOR for all tables somehow does end up being the same--whether 80% or some other value. I am not sure how probable it is that such a system exists, but it is certainly within the realm of possibilities.
I have a large table (~170 million rows, 2 nvarchar and 7 int columns) in SQL Server 2005 that is constantly being inserted into. Everything works ok with it from a performance perspective, but every once in a while I have to update a set of rows in the table which causes problems. It works fine if I update a small set of data, but if I have to update a set of 40,000 records or so it takes around 3 minutes and blocks on the table which causes problems since the inserts start failing.
If I just run a select to get back the data that needs to be updated I get back the 40k records in about 2 seconds. It's just the updates that take forever. This is reflected in the execution plan for the update where the clustered index update takes up 90% of the cost and the index seek and top operator to get the rows take up 10% of the cost. The column I'm updating is not part of any index key, so it's not like it reorganizing anything.
Does anyone have any ideas on how this could be sped up? My thought now is to write a service that will just see when these updates have to happen, pull back the records that have to be updated, and then loop through and update them one by one. This will satisfy my business needs but it's another module to maintain and I would love if I could fix this from just a DBA side of things.
Thanks for any thoughts!
Actually it might reorganise pages if you update the nvarchar columns.
Depending on what the update does to these columns they might cause the record to grow bigger than the space reserved for it before the update.
(See the explanation of how nvarchar is stored at http://www.databasejournal.com/features/mssql/physical-database-design-consideration.html.)
So say a record has a string of 20 characters saved in the nvarchar - this takes 20*2+2 bytes of space (2 for the pointer). This is written at the initial insert into your table (based on the index structure). SQL Server will only use as much space as your nvarchar really takes.
Now comes the update and inserts a string of 40 characters. And oops, the space for the record within your leaf structure of your index is suddenly too small. So off goes the record to a different physical place with a pointer in the old place pointing to the actual place of the updated record.
This then causes your index to go stale, and because the whole physical structure requires changing, you see a lot of index work going on behind the scenes, very likely causing an exclusive table lock escalation.
Not sure how best to deal with this. Personally if possible I take an exclusive table lock, drop the index, do the updates, reindex. Because your updates sometimes cause the index to go stale this might be the fastest option. However this requires a maintenance window.
You should batch up your update into several updates (say 10000 at a time, TEST!) rather than one large one of 40k rows.
This way you will avoid a table lock. SQL Server will only take out about 5000 locks (page or row) before escalating to a table lock, and even this is not very predictable (memory pressure etc.). Smaller updates made in this fashion will at least avoid the concurrency issues you are experiencing.
You can batch the updates using a service or firehose cursor.
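A minimal sketch of the batching idea in plain T-SQL (table, column, and WHERE predicate are placeholders; test the batch size as noted above):

DECLARE @batch int;
SET @batch = 10000;                       -- tune this; TEST as suggested above

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.BigTable
    SET    StatusFlag = 1                 -- placeholder column being updated
    WHERE  NeedsUpdate = 1;               -- placeholder predicate selecting the ~40k rows

    IF @@ROWCOUNT = 0 BREAK;              -- stop when no matching rows remain
END;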
Read this for more info:
http://msdn.microsoft.com/en-us/library/ms184286.aspx
Hope this helps
Robert
The most brute-force (and simplest) way is to have a basic service, as you mentioned. That has the advantage of being able to scale with the load on the server and/or the data load.
For example, if you have a set of updates that must happen ASAP, then you could turn up the batch size. Conversely, for less important updates, you could have the update "server" slow down if each update is taking "too long" to relieve some of the pressure on the DB.
This sort of "heartbeat" process is rather common in systems and can be very powerful in the right situations.
It's weird that your analyzer says it takes time to update the clustered index. Does the size of the data change when you update? It seems like the varchar is causing the data to be reorganized, which might need updates to index pointers (as KMB has already pointed out). In that case you might want to increase the free space percentage on the data and index pages so that they can grow without relinking/reallocation. Since an update is an IO-intensive operation (unlike a read, which can be buffered), the performance also depends on several factors:
1) Are your tables partitioned by data?
2) Does the entire table lie on the same SAN disk (or is the SAN striped well)?
3) How verbose is the transaction logging? Can the buffer size of the transaction logging be increased to support larger writes to the log and support massive inserts?
It's also important which API/language you are using; e.g. JDBC supports a batch update feature, which makes the updates a bit more efficient if you are doing multiple updates.