Indexes in Azure SQL Database

I have an Azure SQL Database that has proved pretty successful so far. It's about 20 months old, no maintenance done... but it has handled a lot. Some tables have millions of rows, and when querying on columns that are indexed, query response times are acceptable when using the web application that talks to it.
However, I read conflicting advice on rebuilding indexes.
This guy says there is no point in doing it: http://beyondrelational.com/modules/2/blogs/76/posts/15290/index-fragmentation-in-sql-azure.aspx
This guy says go ahead and rebuild:
https://alexandrebrisebois.wordpress.com/2013/02/06/dont-forget-about-index-maintenance-on-windows-azure-sql-database/
I have run some rebuild index statements on some of the smaller tables storing a few thousand rows. Some of the fragmentation would drop by about half; running the rebuild a second time would bring it down a bit further.
These rebuilds ran in about 2-10 seconds each, depending on the size of the table.
Then I ran a rebuild on a table that has about 2 million rows, whose indexes had the following fragmentation (index name, average fragmentation %):
PK__tmp_ms_x__CDEC17C03A4CDB46 55.2335782060565
PK__tmp_ms_x__CDEC17C03A4CDB46 0
IX_this_is_my_fk_index 15.7538877620014
It took 33 minutes.
The result was
PK__tmp_ms_x__CDEC17C03A4CDB46 0.01
PK__tmp_ms_x__CDEC17C03A4CDB46 0
IX_this_is_my_fk_index 0
Questions:
Query speeds have not really changed since doing the above. Is this normal?
Given that there are many things I have no control over in SQL Azure, does it even make sense to rebuild indexes?
BTW: I am not and never have been a DBA... just a developer

Rebuilding indexes will matter if the indexes are actually being used. If the index isn't being used for the query you're running, then you won't see a difference. If it's only lightly used, you'll see a minor difference once you update statistics. If it's heavily used, you should see a good performance increase most of the time.

The other thing to note with Microsoft SQL Server is that index fragmentation is sometimes irrelevant. When I'm choosing whether or not to rebuild an index, I'm usually looking at the page count combined with fragmentation. If I'm running a query, having performance issues, and using the index, and the index has more than 16,000 pages and is more than 50% fragmented, I'll rebuild it. If the table is small, or if I can use the online option, I will just go ahead and rebuild all of them at the same time.
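If you want a quick way to apply that page-count-plus-fragmentation rule, a query along these lines (the 16,000-page and 50% thresholds are just the rough values mentioned above) will list candidate indexes in the current database:

-- List indexes with more than 16,000 pages and over 50% fragmentation.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.page_count,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count > 16000
  AND ips.avg_fragmentation_in_percent > 50
ORDER BY ips.avg_fragmentation_in_percent DESC;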
Specifically for Azure, my opinion is that if you are trying to improve performance, it's still a good step to take because it's so easy, even if you can't be sure of the results. Shared service or not, and whether or not you can control the hardware layer, reviewing index fragmentation and rebuilding indexes are things you do have access to, so why not make use of them?
So I guess the short answer is yes, in certain situations.
What I would suggest, rather than manually reviewing indexes and rebuilding them, is to set up a nightly or weekly job that runs when your DB is least active. Have it go through all the tables and rebuild their indexes. You can also give it a set running time if you have lots of tables, and make it "stateful" (you can use a table to retain progress info) so it remembers where it left off and resumes at the next scheduled run.
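As a rough sketch of what such a job could look like (the progress table and its columns are hypothetical, and you would schedule the script with whatever mechanism you have available), the core loop might be:

-- Hypothetical progress table so the job can resume where it left off:
-- CREATE TABLE dbo.IndexMaintenanceProgress
--     (TableName sysname PRIMARY KEY, LastRebuildUtc datetime2 NULL);

DECLARE @table sysname, @sql nvarchar(max);

DECLARE table_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    ORDER BY t.name;   -- stable order makes resuming predictable

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- ONLINE = ON needs a tier/edition that supports it; drop the option otherwise.
    SET @sql = N'ALTER INDEX ALL ON ' + @table + N' REBUILD WITH (ONLINE = ON);';
    EXEC sys.sp_executesql @sql;

    -- Record progress (hypothetical table) so a time-boxed run can pick up here.
    -- UPDATE dbo.IndexMaintenanceProgress SET LastRebuildUtc = SYSUTCDATETIME()
    --     WHERE TableName = @table;

    FETCH NEXT FROM table_cursor INTO @table;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;

To time-box it, you would check elapsed time inside the loop and exit once the window is used up; the progress table then lets the next scheduled run continue from the next table.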

Related

Different execution plan for the same query

I have two identical SQL Databases that contain nearly the same records in each of their tables. The only difference between them is that one lives on my local machine and the other one is in Azure. Yet, after investigating a performance issue I found out that the two databases produce different execution plans for some of the queries. To give you an example, here is a simple query that takes approximately 1 second to run.
SELECT COUNT(*)
FROM Deposits
INNER JOIN Households ON Households.Id = Deposits.HouseholdId
WHERE CashierId = 'd89c8029-4808-4cea-b505-efd8279dc66d'
It is obvious that the inner join can be eliminated, as it doesn't contribute to the end result. Indeed, it is eliminated on my local machine, but this is not the case in Azure. Here are two pictures visualizing the execution plans for my local machine and Azure, respectively.
Just to give you some background on what happened, everything worked perfectly until I scaled down my Azure database to Basic 5 DTUs. Afterwards, some queries became extremely slow and I had no idea why. I scaled up the Db instance again but I saw no improvement. Then I rebuilt the indexes and noticed that if I rebuild them in the correct order, the queries will once again start performing as expected. Yet, I have absolutely no idea why I need to rebuild them in some specific order and, even less, how to determine the correct order. Now, I have issues with virtually all queries related to the Deposits table. I tried rebuilding the indexes but I saw no improvement whatsoever. I suspect that there is something to do with the PK index but I am not quite sure. The table contains approximately 300k rows.
Your databases may have the same schemas and approximately the same number of records, but it's tough to make them truly identical. Are you sure your databases are identical? Start by comparing versions:
SELECT SERVERPROPERTY(N'ProductVersion');
What about the hardware they are running on? CPU? Memory? Disks? I mean, it's Azure, right? It's hard to know what actual server hardware you are using, and SQL Server's query optimizer will adjust for hardware differences.

Additionally, even if the hardware and software were identical, the simple fact that the databases are used differently can give them different statistics. The first time you run a query, it is evaluated and optimized using statistics. All subsequent calls of that query will use that initially cached query plan. Tables change over time; they get taller. The shape of the data changes, meaning an old cached query plan can eventually fall out of favor. Certain things can reshape the data and cause a change in statistics, which in turn invalidates the cached query plan, such as rebuilding your indices.

Try this: to force a fresh query plan on each statement, add an
OPTION (RECOMPILE)
hint to the bottom of your queries. Does that help or stabilize performance?

Furthermore, and this is a stretch, can I assume that you aren't using that exact same query over and over? It seems more likely that you haven't hardcoded that GUID, and that the query plan was really created for something that takes @CashierId as a parameter. If so, your existing query plan could be a victim of parameter sniffing, where the plan you're pulling was optimized for some specific GUID and does poorly when you pass in anything else.
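Applied to the query from the question, that would look like this:

SELECT COUNT(*)
FROM Deposits
INNER JOIN Households ON Households.Id = Deposits.HouseholdId
WHERE CashierId = 'd89c8029-4808-4cea-b505-efd8279dc66d'
OPTION (RECOMPILE);  -- compile a fresh plan for this execution instead of reusing a cached one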
For more info about what that statement does, have a look here. For more understanding of why it's hard to have identical databases, take a look here and here.
Good luck! Hope you can get it sorted.

SQL Server slow index after time

One index in one of my databases has the strange behaviour of getting slower as time goes by, even though my maintenance plan includes a 'Rebuild index' step on all user databases. After a while it gets so slow that my entire application/server grinds to a halt.
But when I do a manual rebuild on the particular index, the query time is brought down from minutes to half a second.
Why does the 'rebuild index' step of the maintenance plan seem to skip this index, and why does it work manually? (the maintenance plan runs correctly without errors, every night)
Fragmentation can make a big difference to index performance. You may need to drop the indexes altogether and re-create them.
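For reference, a plain rebuild versus a full re-create looks roughly like this (index, table, and column names are placeholders); CREATE INDEX ... WITH (DROP_EXISTING = ON) rolls the drop and re-create into one statement:

-- Plain rebuild of a single index.
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- Full re-create; DROP_EXISTING = ON drops and rebuilds the index in one statement.
CREATE NONCLUSTERED INDEX IX_MyIndex
    ON dbo.MyTable (SomeColumn)
    WITH (DROP_EXISTING = ON);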

Is there a way to Simulate Online Index Rebuilding in SQL Server without upgrading to Enterprise?

We currently have a SQL Agent job that runs once a week to identify highly fragmented indexes and rebuild them. For certain large indexes on large tables, this ends up causing the system to time out, as the index is unavailable during the rebuild.
We have identified a strategy that should significantly reduce the fragmentation that occurs, but that won't be implemented for some time, and it doesn't cover everything.
We looked into upgrading to the Enterprise edition, which allows for online index rebuilding. However, the cost is prohibitive for us at this point.
The indexes don't really change that much, so we can assume that they are static, at least for the most part.
I did envision a way that we could perhaps simulate online index rebuilding. It could work as follows:
For each of the large indexes identified, run a script to:
Check the fragmentation and proceed if it exceeds a certain threshold.
Create a new index named CurrentIndex_TEMP.
Initiate a rebuild on the original index.
Remove the temporary index.
It seems that once the temporary index has been built, it would be possible to rebuild the original index without causing any downtime, since SQL Server would have another index available to use for queries that would otherwise have used the index being rebuilt.
Iterating through this for each index would hopefully minimize the increase in overall index size, as each temporary index would be removed before any other temporary indexes were created.
This strategy would also retain the historical data on the indexes. I had originally considered a strategy of first renaming the current index, then creating it again with the original name, and then removing the index that had been renamed. This, however, would result in a loss of history.
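Spelled out in T-SQL, the per-index steps described above might look roughly like this (the table, index, and column names, and the 30% threshold, are all placeholders):

-- 1. Check fragmentation and only proceed past a threshold.
IF EXISTS (
    SELECT 1
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.BigTable'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE i.name = N'IX_BigTable_Current'
      AND ips.avg_fragmentation_in_percent > 30
)
BEGIN
    -- 2. Build a temporary duplicate so readers still have a usable index.
    CREATE NONCLUSTERED INDEX IX_BigTable_Current_TEMP
        ON dbo.BigTable (KeyColumn);

    -- 3. Rebuild the original index (offline on Standard edition).
    ALTER INDEX IX_BigTable_Current ON dbo.BigTable REBUILD;

    -- 4. Remove the temporary duplicate.
    DROP INDEX IX_BigTable_Current_TEMP ON dbo.BigTable;
END;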
So, my question...
Is this a feasible strategy? Are there any significant problems I may run into? I understand that this will take some manual oversight from time to time, but I'm willing to accept that at this point.
Thanks for the help.
Any offline index rebuild will lock the table, so you don't gain anything by creating a duplicate index.
With great effort you can simulate online index rebuilds, but you have to rebuild all indexes on the table at once:
Create a copy of the table T with identical schema ("T_new")
Rename T to T_old
Create a view T defined as select * from T_old and set up INSTEAD OF DML triggers which perform all DML on both T_old and T_new
In a background job copy over batches from T_old to T_new using the MERGE statement
Finally, after the copy is completed, perform some renaming and dropping to make T_new the new T
This requires insanely high effort and good testing, but it lets you make pretty much arbitrary schema changes online.
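A heavily simplified sketch of steps 2-4, assuming a two-column table and that T_new was already created in step 1 (all names are placeholders, and real code also needs UPDATE/DELETE triggers, identity handling, and conflict handling between the trigger and the copy job):

-- Step 2: move the existing table out of the way.
EXEC sys.sp_rename N'dbo.T', N'T_old';
GO

-- Step 3: a view with the old name, plus INSTEAD OF triggers so new writes
-- land in both copies (only the INSERT trigger is sketched here).
CREATE VIEW dbo.T AS SELECT Id, Payload FROM dbo.T_old;
GO
CREATE TRIGGER dbo.trg_T_Insert ON dbo.T
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.T_old (Id, Payload) SELECT Id, Payload FROM inserted;
    INSERT INTO dbo.T_new (Id, Payload) SELECT Id, Payload FROM inserted;
END;
GO

-- Step 4 (background job): copy existing rows across in batches with MERGE.
-- The watermark is a local variable here; a real job would persist it between runs.
DECLARE @lastId int = 0;
DECLARE @batch TABLE (Id int);

WHILE 1 = 1
BEGIN
    DELETE FROM @batch;

    MERGE dbo.T_new AS target
    USING (SELECT TOP (10000) Id, Payload
           FROM dbo.T_old
           WHERE Id > @lastId
           ORDER BY Id) AS source
        ON target.Id = source.Id
    WHEN MATCHED THEN
        UPDATE SET target.Payload = source.Payload
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, Payload) VALUES (source.Id, source.Payload)
    OUTPUT source.Id INTO @batch (Id);

    IF NOT EXISTS (SELECT 1 FROM @batch) BREAK;

    SELECT @lastId = MAX(Id) FROM @batch;
END;

The final renaming-and-dropping step is then just dropping the view and its triggers, dropping T_old, and renaming T_new to T.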

Stored Procedure Execution Plan - Data Manipulation

I have a stored proc that processes a large amount of data (about 5m rows in this example). The performance varies wildly. I've had the process running in as little as 15 minutes and seen it run for as long as 4 hours.
For maintenance, and in order to verify that the logic and processing is correct, we have the SP broken up into sections:
TRUNCATE and populate a work table (indexed) we can verify later with automated testing tools.
Join several tables together (including some of these work tables) to produce another work table.
Repeat 1 and/or 2 until a final output is produced.
My concern is that this is a single SP and so gets an execution plan when it is first run (even WITH RECOMPILE). But at that time, the work tables (permanent tables in a Work schema) are empty.
I am concerned that, regardless of the indexing scheme, the execution plan will be poor.
I am considering breaking up the SP and calling separate SPs from within it, so that they could take advantage of a re-evaluated execution plan after the data in the work tables is built. I have also seen references to using EXEC to run dynamic SQL, which obviously might get a recompile as well.
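Roughly, the kind of split I have in mind would look like this (procedure and table names here are made up): the parent procedure populates a work table and only then calls a child procedure, so the child's statements are compiled against a populated table on first execution; a statement-level OPTION (RECOMPILE) in the child covers repeat runs.

CREATE PROCEDURE Work.LoadStage1
AS
BEGIN
    SET NOCOUNT ON;
    TRUNCATE TABLE Work.Stage1;
    INSERT INTO Work.Stage1 (CustomerId, Amount)
    SELECT CustomerId, Amount FROM dbo.SourceRows;   -- hypothetical source
END;
GO

CREATE PROCEDURE Work.LoadStage2
AS
BEGIN
    SET NOCOUNT ON;
    TRUNCATE TABLE Work.Stage2;
    INSERT INTO Work.Stage2 (CustomerId, Total)
    SELECT s.CustomerId, SUM(s.Amount)
    FROM Work.Stage1 AS s
    GROUP BY s.CustomerId
    OPTION (RECOMPILE);   -- plan compiled against the now-populated work table
END;
GO

CREATE PROCEDURE Work.RunAll
AS
BEGIN
    EXEC Work.LoadStage1;   -- Stage1 is full by the time LoadStage2 needs a plan
    EXEC Work.LoadStage2;
END;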
I'm still trying to get SHOWPLAN permissions, so I'm flying quite blind.
Are you able to determine whether there are any locking problems? Are you running the SP in sufficiently small transactions?
Breaking it up into subprocedures should have no benefit.
Somebody should be concerned about your productivity, working without basic optimization resources. That suggests there may be other possible unseen issues as well.
Grab the free copy of "Dissecting SQL Server Execution Plans" from the link below and maybe you can pick up a tip or two that will give you some idea of what's really going on under the hood of your SP.
http://dbalink.wordpress.com/2008/08/08/dissecting-sql-server-execution-plans-free-ebook/
Are you sure that the variability you're seeing is caused by "bad" execution plans? This may be a cause, but there may be a number of other reasons:
"other" load on the db machine
when using different data, there may be "easy" and "hard" data
issues with having to allocate more memory/file storage
...
Have you tried running the SP with the same data a few times?
Also, in order to figure out what is causing the runtime variability, I'd try to do some detailed measuring to pin the problem down to a specific section of the code. (The easiest way would be to insert some log calls at various points in the SP.) Then try to explain why that section is slow (other than "5M rows" ;-)) and figure out a way to make it faster.
For now, I think there are a few questions to answer before going down the "splitting up the sp" route.
You're right, it is quite difficult for you to get a clear picture of what is happening behind the scenes until you can get the "actual" execution plans from several executions of your overall process.
One point to consider, perhaps: are your work tables physical or temporary tables? If they are physical, you will get a performance gain by inserting new data into a new table without an index (i.e. a heap), which you can then build an index on after all the data has been inserted.
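A minimal illustration of that load-into-a-heap-then-index pattern, with made-up names:

-- Work.Staging starts out as a heap (no indexes), so the bulk insert
-- doesn't pay index-maintenance cost on every row.
-- DROP INDEX IF EXISTS IX_Staging_CustomerId ON Work.Staging;  -- if re-running (SQL Server 2016+ syntax)
TRUNCATE TABLE Work.Staging;

INSERT INTO Work.Staging (CustomerId, Amount)
SELECT CustomerId, Amount
FROM dbo.SourceRows;          -- hypothetical source

-- Build the index once, after all the data is in place.
CREATE NONCLUSTERED INDEX IX_Staging_CustomerId
    ON Work.Staging (CustomerId);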
Also, what is the purpose of your process? It sounds like you are moving quite a bit of data around, in which case you may wish to consider partitioning. You can switch data in and out of your main table with relative ease.
Hope what I have detailed is clear but please feel free to pose further questions.
Cheers, John
In several cases I've seen, this level of variability in execution times / query plans comes down to statistics. I would recommend some tests running update statistics against the tables you are using just before the process is run. This will both force a re-evaluation of the execution plan by SQL Server and, I suspect, give you more consistent results. Additionally, you may do well to see if the differences in execution time correlate with re-indexing jobs by your DBAs. Perhaps you could also gather some index health statistics before each run.
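For example (table names are placeholders; FULLSCAN is slower but gives the most accurate statistics):

UPDATE STATISTICS Work.Stage1 WITH FULLSCAN;
UPDATE STATISTICS Work.Stage2 WITH FULLSCAN;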
If not, as other answerers have suggested, you are more likely suffering from locking and/or contention issues.
Good luck with it.
The only thing I can think of that an execution plan would do wrong when there's no data is err on the side of using a table scan instead of an index, since table scans are super fast when the whole table fits into memory. Are there other negatives you're actually observing, or are sure are happening, because there's no data when the execution plan is created?
You can force usage of indexes in your query...
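For example, with a table hint (names are placeholders); this overrides the optimizer, so it's usually a last resort:

SELECT w.CustomerId, w.Amount
FROM Work.Stage1 AS w WITH (INDEX (IX_Stage1_CustomerId))   -- force this index
WHERE w.CustomerId = 42;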
Seems to me like you might be going down the wrong path.
Is this an infeed or outfeed of some sort, or are you creating a report? If it is a feed, I would suggest changing the process to use SSIS, which should be able to move 5 million records very quickly.

Performance Tuning PostgreSQL

Keep in mind that I am a rookie in the world of sql/databases.
I am inserting/updating thousands of objects every second. Those objects are actively being queried for at multiple second intervals.
What are some basic things I should do to performance tune my (postgres) database?
It's a broad topic, so here's lots of stuff for you to read up on.
EXPLAIN and EXPLAIN ANALYZE are extremely useful for understanding what's going on in your DB engine
Make sure relevant columns are indexed
Make sure irrelevant columns are not indexed (insert/update-performance can go down the drain if too many indexes must be updated)
Make sure your postgresql.conf is tuned properly
Know what work_mem is, and how it affects your queries (mostly useful for larger queries)
Make sure your database is properly normalized
VACUUM for clearing out dead rows
ANALYZE for updating statistics (the statistics target controls how much detail is collected)
Persistent connections (you could use a connection manager like pgpool or pgbouncer)
Understand how queries are constructed (joins, sub-selects, cursors)
Caching of data (i.e. memcached) is an option
And when you've exhausted those options: add more memory, faster disk-subsystem etc. Hardware matters, especially on larger datasets.
And of course, read all the other threads on postgres/databases. :)
First and foremost, read the official manual's Performance Tips.
Running EXPLAIN on all your queries and understanding its output will let you know if your queries are as fast as they could be, and if you should be adding indexes.
Once you've done that, I'd suggest reading over the Server Configuration part of the manual. There are many options which can be fine-tuned to further enhance performance. Make sure to understand the options you're setting though, since they could just as easily hinder performance if they're set incorrectly.
Remember that every time you change a query or an option, test and benchmark so that you know the effects of each change.
Actually, there are some simple rules which will in most cases get you enough performance:
Indices are the first part. Primary keys are automatically indexed. I recommend putting indices on all foreign keys. Further, put indices on all columns which are frequently queried; if a table has heavily used queries that filter on more than one column, put a composite index on those columns together.
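For example (hypothetical tables); note that PostgreSQL indexes primary keys and unique constraints automatically, but not foreign-key columns:

-- Index a foreign-key column that is joined on frequently.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Composite index for a heavily used query filtering on two columns together.
CREATE INDEX idx_orders_status_created_at ON orders (status, created_at);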
Memory settings in your PostgreSQL installation. Set the following parameters higher:
shared_buffers, work_mem, maintenance_work_mem, temp_buffers
If it is a dedicated database machine you can easily set the first three of these to half the RAM (just be careful under Linux with shared_buffers; you may have to adjust the shmmax kernel parameter). In any other case it depends on how much RAM you would like to give to PostgreSQL.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html
http://wiki.postgresql.org/wiki/Performance_Optimization
The absolute minimum I'll recommend is the EXPLAIN ANALYZE command. It will show a breakdown of subqueries, joins, etc., along with the actual amount of time consumed in each operation. It will also alert you to sequential scans and other nasty trouble.
It is the best way to start.
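For example (table and columns are placeholders); note that EXPLAIN ANALYZE actually executes the query in order to measure it:

-- Shows the actual plan, with per-node timings and row counts,
-- and makes sequential scans on large tables easy to spot.
EXPLAIN ANALYZE
SELECT *
FROM measurements
WHERE sensor_id = 42
ORDER BY recorded_at DESC
LIMIT 100;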
Put fsync = off in your postgresql.conf if you trust your filesystem; otherwise each PostgreSQL operation will be immediately written to the disk (with the fsync system call).
We have had this option turned off on many production servers for nearly 10 years, and we have never had data corruption.