We have a serverless SQL Server database deployed in Azure, and we get a lot of index recommendations. All of them have High estimated impact, but all of them get reverted.
Another strange thing is that DTU regression (overall), DTU regression (affected queries), Queries with improved performance, and Queries with regressed performance in the validation report are all zero for every recommendation (over 20 and counting).
Is it normal that none are applied after the validation stage?
Azure SQL Database has an intelligent automatic tuning mechanism that takes care of your indexes. If you don't want to identify and monitor indexes yourself, you can let Azure SQL Database do index management for you, tune your database, and ensure that your data structures dynamically adapt to your workload.
Azure SQL Database monitors query performance after an index is created by automatic tuning. Automatic indexing uses data from the missing-index DMVs, and it monitors each recommendation over time using the Query Store. If it does not detect any performance improvement, it automatically reverts the recommendation. The validation process can take up to 72 hours.
In addition, if the index creation fails or is postponed due to high CPU, high I/O, or low storage, then once resource utilization returns to normal and automatic tuning still believes the index would be beneficial, the index creation process will start again.
This is how it works.
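To dig into why each recommendation was reverted, you can query the tuning-recommendation DMV directly. A minimal sketch against `sys.dm_db_tuning_recommendations` (the JSON paths follow the documented schema for Azure SQL Database):

```sql
-- Inspect automatic tuning recommendations and their current state
SELECT name,
       type,
       reason,
       valid_since,
       last_refresh,
       JSON_VALUE(state,   '$.currentValue')                  AS current_state,
       JSON_VALUE(details, '$.implementationDetails.script')  AS ddl_script
FROM sys.dm_db_tuning_recommendations;
```

A `current_state` of `Reverted` alongside the zeroed validation metrics would at least confirm that the revert came from the validation stage rather than resource pressure.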
Related
Before spinning up an actual (MySQL, Postgres, etc) database, are there ways to estimate how many reads & writes per second the database can handle?
I'm assuming this is dependent on the CPU and memory (+ network if we're sharding), but is there a good best practice on how to put these variables together?
This is useful for estimating cost and understanding how much of a traffic spike the db can handle.
You can learn from others' results to gauge the transactions per second you'll get from certain instances. For example, https://aiven.io/blog/postgresql-12-gcp-aws-performance gives you a good idea of how PostgreSQL 12 performs.
Percona has blogged about performance benchmarks also: https://www.percona.com/blog/2017/01/06/millions-queries-per-second-postgresql-and-mysql-peaceful-battle-at-modern-demanding-workloads/
Here's another benchmark with useful information: http://dimitrik.free.fr/blog/posts/mysql-performance-80-and-sysbench-oltp_rw-updatenokey.html about MySQL 8.0 and links to 5.7 performance.
There are several blogs about SQL Server performance such as https://storagehub.vmware.com/t/microsoft-sql-server-2017-database-on-vmware-vsan-tm-6-7-using-vmware-cloud-foundation-tm/performance-test-results/ that can also help you recognize the workloads these databases can handle.
Under 10K TPS shouldn't be much of a problem with modern hardware. You can start with a common cloud configuration or a standard-sized server in your own environment. Use SSDs. Tune your server settings to gain more speed and be ready to add resources gradually. As Gordon mentions, benchmark your database after you have installed it. As a rule of thumb, I'd start with 32 GB of memory, 8 cores, and SSDs to pull 10K TPS, and adjust from there.
As you assumed, a lot depends on the # and type of CPU/memory/SSD, your workload, how you structure data, latency between your app and database, reporting happening against the database, master/slave configuration, types of transactions, storage engines etc.
I have a stored procedure in Azure DW which runs very slowly. I copied all the tables and the SP to a different server, and there it takes much less time to execute. I created the tables using HASH distribution on the unique field, but the SP is still running very slowly. Please advise how I can improve the performance of the SP in Azure DW.
From your latest comment, the data sample is way too small for any reasonable tests on SQL DW. Remember SQL DW is MPP while your local on-premises SQL Server is SMP. Even with DWU100, the underlying layout of this MPP architecture is very different from your local SQL Server. For instance, every SQL DW has 60 user databases powering the DW and data is spread across them. Default storage is clustered column store which is optimized for common DW type workloads.
When a query is sent to DW, it has to build a distributed query plan, push it to the underlying DBs to build local plans, execute them, and run the results back up the stack. This is a lot of overhead for small data sets and simple queries. However, when you are dealing with hundreds of TBs of data with billions of rows and you need to run complex aggregations, this additional overhead is relatively tiny; the benefits you get from the MPP processing power make it inconsequential.
There's no hard number on the actual size where you'll see real gains but at least half a TB is a good starting point and rows really should be in the tens of millions. Of course, there are always edge cases where your data set might not be huge but the workload naturally lends itself to MPP so you might still see gains but that's not common. If your data size is in the tens or low hundreds of GB range and won't grow significantly, you're likely to be much happier with Azure SQL Database.
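Table geometry matters a great deal here too. As a reference point, a sketch of a hash-distributed, clustered-columnstore table in SQL DW (the table and column names are made up for illustration):

```sql
-- Hypothetical fact table: distribute on a high-cardinality key that is
-- frequently joined/aggregated on, stored as clustered columnstore
-- (the SQL DW default storage format)
CREATE TABLE dbo.FactSales
(
    SaleId      BIGINT        NOT NULL,
    CustomerId  INT           NOT NULL,
    SaleDate    DATE          NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (CustomerId),
    CLUSTERED COLUMNSTORE INDEX
);
```

Note that columnstore row groups want on the order of a million rows each, per distribution, to compress well; with data spread over 60 distributions, a small table fights the format, which is one reason small data sets often run slower here than on a local SMP SQL Server.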
As for resource class management, workload monitoring, etc... check out the following links:
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-develop-concurrency
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-best-practices
I'm hoping to catch the eye of someone with experience in both SQL Server and DB2. I thought I'd ask to see if anyone could comment on these off the top of their head. The following is a list of features I use with SQL Server that I'd like to do with DB2 as well.
Configuration option "optimize for ad hoc workloads", which saves first-time query plans as stubs, to avoid memory pressure from heavy-duty one-time queries (especially helpful with an extreme number of parameterized queries). What - if any - is the equivalent for this with DB2?
On a similar note, what would be the equivalents for SQL Server configuration options auto create statistics, auto update statistics and auto update statistics async. Which all are fundamental for creating and maintaining proper statistics without causing too much overhead during business hours?
Indexes. The MSSQL standard for index maintenance is REORGANIZE when fragmentation is between 5% and 35%, and REBUILD (technically identical to DROP & RECREATE) when over 35%. As importantly, MSSQL supports ONLINE index rebuilds, which keep the associated data accessible to read/write operations. Anything similar with DB2?
Statistics. In SQL Server the standard statistics update procedure is all but useless in larger DB's, as the sample ratio is far too low. Is there an equivalent to UPDATE STATISTICS X WITH FULLSCAN in DB2, or a similarly functioning consideration?
In MSSQL, REBUILD index operations also fully recreate the underlying statistics, which is important to consider with maintenance operations in order to avoid overlapping statistics maintenance. The best method for statistics updates in larger DB's also involves targeting them on a per-statistic basis, since full table statistics maintenance can be extremely heavy when for example only a few of the dozens of statistics on a table actually need to be updated. How would this relate to DB2?
Show execution plan is an invaluable tool for analyzing specific queries and potential index / statistic issues with SQL Server. What would be the best similar method to use with DB2 (Explain tools? Or something else)?
Finding the bottlenecks: SQL Server has system views such as sys.dm_exec_query_stats and sys.dm_exec_sql_text, which make it extremely easy to see the most run, and most resource-intensive (number of logical reads, for instance) queries that need tuning, or proper indexing. Is there an equivalent query in DB2 you can use to instantly recognize problems in a clear and easy to understand manner?
All these questions represent a big chunk of where many of the problems are with SQL Server databases. I'd like to take that know-how, and translate it to DB2.
I'm assuming this is about DB2 for Linux, Unix and Windows.
Configuration option "optimize for ad hoc workloads", which saves first-time query plans as stubs, to avoid memory pressure from heavy-duty one-time queries (especially helpful with an extreme number of parameterized queries). What - if any - is the equivalent for this with DB2?
There is no equivalent; DB2 will evict the least recently used plans from the package cache. One can enable automatic memory management for the package cache, where DB2 will grow and shrink it on demand (taking into account other memory consumers, of course).
what would be the equivalents for SQL Server configuration options auto create statistics, auto update statistics and auto update statistics async.
Database configuration parameters auto_runstats and auto_stmt_stats.
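These are enabled per database from the DB2 command line processor; a sketch with a hypothetical database name MYDB (the parent switches for automatic maintenance must also be on for auto_runstats to take effect):

```sql
-- Enable the automatic maintenance hierarchy, then automatic statistics
-- collection (auto_runstats) and real-time statistics (auto_stmt_stats)
UPDATE DB CFG FOR MYDB USING AUTO_MAINT ON AUTO_TBL_MAINT ON;
UPDATE DB CFG FOR MYDB USING AUTO_RUNSTATS ON AUTO_STMT_STATS ON;
```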
MSSQL standard for index maintenance is REORGANIZE when fragmentation is between 5 - 35%, REBUILD (technically identical to DROP & RECREATE) when over 35%. As importantly, MSSQL supports ONLINE index rebuilds
You have an option of automatic table reorganization (which includes indexes); the trigger threshold is not documented. Additionally you have a REORGCHK utility that calculates and prints a number of statistics that allow you to decide what tables/indexes you want to reorganize manually. Both table and index reorganization can be performed online with read-only or full access.
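As a sketch of the manual route (schema and table name are hypothetical):

```sql
-- Refresh statistics and flag tables/indexes whose formulas indicate
-- reorganization is needed (asterisks in the REORGCHK output)
REORGCHK UPDATE STATISTICS ON TABLE ALL;

-- Reorganize the indexes of one table online, keeping it
-- available for both reads and writes
REORG INDEXES ALL FOR TABLE MYSCHEMA.MYTABLE ALLOW WRITE ACCESS;

-- Online (inplace) table reorganization
REORG TABLE MYSCHEMA.MYTABLE INPLACE ALLOW WRITE ACCESS;
```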
Is there an equivalent to UPDATE STATISTICS X WITH FULLSCAN in DB2, or a similarly functioning consideration? ... The best method for statistics updates in larger DB's also involves targeting them on a per-statistic basis, since full table statistics maintenance can be extremely heavy when for example only a few of the dozens of statistics on a table actually need to be updated.
You can configure automatic statistics collection to use sampling or not (configuration parameter auto_sampling). When updating statistics manually using the RUNSTATS utility you have full control over the sample size and what statistics to collect.
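For example, two hedged RUNSTATS variants against a hypothetical table, one full-scan and one sampled:

```sql
-- Full statistics with value distributions and detailed index stats
-- (closest analogue to UPDATE STATISTICS ... WITH FULLSCAN)
RUNSTATS ON TABLE MYSCHEMA.MYTABLE
    WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- Sampled statistics on a large table: read roughly 10% of the pages
RUNSTATS ON TABLE MYSCHEMA.MYTABLE
    WITH DISTRIBUTION AND INDEXES ALL
    TABLESAMPLE SYSTEM (10);
```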
Show execution plan is an invaluable tool for analyzing specific queries and potential index / statistic issues with SQL Server. What would be the best similar method to use with DB2
You have both GUI (Data Studio, Data Server Manager) and command-line (db2expln, db2exfmt) tools to generate query plans, including plans for statements that are in the package cache or are currently executing.
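A minimal command-line workflow, assuming a database named MYDB and that the explain tables have not been created yet:

```sql
-- One-time setup: create the explain tables in the current database
CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C', NULL, NULL);

-- Capture the access plan for a statement (table/predicate are made up)
EXPLAIN PLAN FOR
SELECT * FROM MYSCHEMA.MYTABLE WHERE ID = 42;

-- Then, from the shell, format the most recent captured plan:
--   db2exfmt -d MYDB -1 -o plan.txt
```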
Finding the bottlenecks: SQL Server has system views such as sys.dm_exec_query_stats and sys.dm_exec_sql_text, which make it extremely easy to see the most run, and most resource-intensive (number of logical reads, for instance) queries that need tuning
There is an extensive set of monitor procedures, views and table functions, e.g. MONREPORT.DBSUMMARY(), TOP_DYNAMIC_SQL, SNAP_GET_DYN_SQL, MON_CURRENT_SQL, MON_CONNECTION_SUMMARY etc.
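For instance, a sketch of a "worst offenders" query over the package cache using MON_GET_PKG_CACHE_STMT (thresholds and ordering are a matter of taste):

```sql
-- Top 10 statements in the package cache by total CPU time,
-- across all members (-2); also shows executions and logical reads
SELECT NUM_EXECUTIONS,
       TOTAL_CPU_TIME,
       ROWS_READ,
       SUBSTR(STMT_TEXT, 1, 100) AS STMT
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY TOTAL_CPU_TIME DESC
FETCH FIRST 10 ROWS ONLY;
```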
A basic production-level database in Heroku implements a 400 MB cache. I have a production site running 2 dynos and a worker, which is pretty heavy on reads and writes. The database is the bottleneck in my app.
A write to the database will invalidate many queries, as searches are performed across the database.
My question is, given the large jump in price between the $9 starter and $50 first level production database, would migrating be likely to give a significant performance improvement?
"Faster" is an odd metric here. This implies something like CPU, but CPU isn't always a huge factor in databases, especially if you're not doing heavy writes. You Basic database has 0mb of cache – every query hits disk. Even a 400mb cache will seem amazing compared to this. Examine your approximate dataset size; a general rule of thumb is for your dataset to fit into cache. Postgres will manage this cache itself, and optimize for the most referenced data.
Ultimately, Heroku Postgres doesn't sell raw performance. The benefits of the Production-tier are multiple, but to name a few: In-memory Cache, Fork/Follow support, 500 available connections, 99.95% expected uptime.
You will definitely see a performance boost when upgrading to a Production-tier plan; however, it's nearly impossible to claim it will be "3x faster" or similar, as that depends on how you're using the database.
It sure is a steep step, so the real question is how bad the bottleneck is. It will cost you an extra $40, but once your app runs smoothly again, it could also mean more revenue. Of course, you could also consider other hosting services, but personally I like Heroku best (albeit cheaper options are available). Besides, you are already familiar with Heroku. There is more information on the Heroku Dev Center regarding their different plans:
https://devcenter.heroku.com/articles/heroku-postgres-plans:
Production plans
Non-production applications, or applications with minimal data storage, performance or availability requirements can choose between one of the two starter tier plans, dev and basic, depending on row requirements. However, production applications, or apps that require the features of a production tier database plan, have a variety of plans to choose from. These plans vary primarily by the size of their in-memory data cache.
Cache size
Each production tier plan’s cache size constitutes the total amount of RAM given to Postgres. While a small amount of RAM is used for managing each connection and other tasks, Postgres will take advantage of almost all this RAM for its cache.
Postgres constantly manages the cache of your data: rows you’ve written, indexes you’ve made, and metadata Postgres keeps. When the data needed for a query is entirely in that cache, performance is very fast. Queries made from cached data are often 100-1000x faster than from the full data set.
Well engineered, high performance web applications will have 99% or more of their queries be served from cache.
Conversely, having to fall back to disk is at least an order of magnitude slower. Additionally, columns with large data types (e.g. large text columns) are stored out-of-line via TOAST, and accessing large amounts of TOASTed data can be slow.
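You can check how well your current dataset fits in cache before deciding, by computing the buffer cache hit ratio from Postgres's statistics views. A sketch (note this only measures hits in Postgres's shared buffers, not the OS page cache):

```sql
-- Table-level buffer cache hit ratio; values near 0.99 suggest the
-- working set fits in cache, much lower values mean frequent disk reads
SELECT sum(heap_blks_hit)::float
       / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0)
       AS cache_hit_ratio
FROM pg_statio_user_tables;
```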
Hi all, I'm completely new to maintenance tasks on SQL Server. I've set up a data warehouse that basically reads a load of XML files and imports the data into several tables using an SSIS package. I've set indexes on the tables concerned and optimized my SSIS package. However, I know I should perform some maintenance tasks, but I don't really know where to begin. We are talking about quite a bit of data: we keep data for up to 6 months, and so far we have 3 months' worth; the database is currently 147,142.44 MB with roughly 57,690,230 rows in the main table. So it could easily double in size. Just wondering what your recommendations are?
While there are the usual index rebuilds and statistics updates that are part of normal maintenance, I would look at all of the currently long-running queries and try to do some index tuning before the data size grows. Resizing the database also forms part of a normal maintenance plan; if you can predict the growth and allocate enough space between maintenance runs, you can avoid the performance hit of auto-grow (which will always happen at the worst possible time).
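As a starting point for the index part, a sketch that finds fragmented indexes and the common follow-up commands (the index and table names here are hypothetical; pick thresholds that suit your workload):

```sql
-- List indexes in the current database with noticeable fragmentation
SELECT OBJECT_NAME(ips.object_id)         AS table_name,
       i.name                             AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index: REORGANIZE for light fragmentation, REBUILD for heavy
ALTER INDEX IX_MainTable_Date ON dbo.MainTable REORGANIZE;
ALTER INDEX IX_MainTable_Date ON dbo.MainTable REBUILD;

-- And refresh statistics after each large SSIS load
UPDATE STATISTICS dbo.MainTable WITH FULLSCAN;
```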