SQL Server 2012 - Why would changing the compatibility mode affect parallelism in a query? - sql-server-2012

We changed the compatibility mode from 100 (2008) to 110 (2012) on a SQL Server 2012 Enterprise instance last week.
Since then we have found that the performance of one stored procedure has nosedived from 18 minutes to over 48 hours (at which point we killed it).
I changed the compatibility mode back to 100 and the sproc runs in 18 minutes again. Comparing the plans from before and after, the 110 version has parallelism at every step, while the 100 version has none whatsoever. The other stored procedures (about 50 of them) have been running at their normal speed.
This is totally new ground for me. Why would changing the compatibility mode from 100 to 110 cause a huge increase in parallelism in just one stored procedure?

Changing to a higher compatibility level lets the SQL engine pick plans that use the latest optimizer improvements, so the compatibility level change by itself is not the root cause of your query's slowness. You should use the query hint OPTION (MAXDOP n) to find the root cause.
Check the query plan before and after changing the compatibility level. After changing the compatibility level, check the plan with MAXDOP 1 (via the query hint), and then check the plan with MAXDOP n, where n > 1 (depending on the server hardware); a sketch of this follows below.
And don't forget to analyse the query plan produced under the changed compatibility level.
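A minimal sketch of that comparison, assuming a hypothetical table dbo.MyTable standing in for the real procedure body (capture the actual execution plan for each run, e.g. via "Include Actual Execution Plan" in SSMS):

-- 1) Force a serial plan to see whether parallelism itself is the problem:
SELECT t.Col1, t.Col2
FROM dbo.MyTable AS t
WHERE t.SomeFilter = 'Y'
OPTION (MAXDOP 1);

-- 2) Allow a specific degree of parallelism (n depends on the server hardware):
SELECT t.Col1, t.Col2
FROM dbo.MyTable AS t
WHERE t.SomeFilter = 'Y'
OPTION (MAXDOP 4);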

Related

Run a stored procedure under a different compatibility level than the database

I have a stored procedure that, for some reason, runs in 19 seconds only under the SQL Server 2008 compatibility level; if I change the compatibility level to 2017 it takes about 10 minutes to execute. Is there any way to execute the SELECT statement in the stored procedure under 2008 compatibility, instead of changing the whole database's compatibility?
One of the recommendations I have seen online when upgrading to SQL Server 2017 is to keep the compatibility level of your old server and turn on Query Store. Run this way for some amount of time to allow the system to capture query plans. Then change the compatibility level to 2017, and when you find slow-running code, either fix the query or force the plan to use the one that works better. Or you could have the system do it for you by turning on automatic tuning.
You can find information about these at:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
and
https://learn.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning?view=sql-server-2017
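A rough sketch of that workflow (YourDatabase is a placeholder name; the statements themselves are documented in the links above):

-- While still on the old compatibility level, start capturing plans:
ALTER DATABASE YourDatabase SET QUERY_STORE = ON;

-- After a representative workload has run, move to the 2017 level:
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;

-- Optionally let SQL Server revert regressed queries to the last good plan:
ALTER DATABASE YourDatabase SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

If you prefer to force a specific plan manually, Query Store exposes sp_query_store_force_plan, which takes the query and plan IDs shown in the Query Store reports.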

Changed Compatibility Level From 100 to 130 - Why did performance regress?

We are working on SQL Server 2016 and noticed that the master database was on compatibility level 130, but one of our production databases was still on the 2008 level (100), so we switched it, and we are currently taking a performance hit on certain stored procedures/functions. Our main function, which once took 20 seconds, is now taking over a minute. I updated the statistics and cleared the query plan cache for that one object. Should the cache be cleared for the entire database? Any other suggestions? We did this only about 30 minutes ago.
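For reference, a sketch of the cache-clearing options being asked about here, on SQL Server 2016 (dbo.MainFunction is a placeholder object name; the commands themselves are standard):

-- Mark a single object so its cached plans are recompiled on next use:
EXEC sp_recompile N'dbo.MainFunction';

-- SQL Server 2016+: clear the plan cache for the current database only:
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

-- Refresh statistics database-wide if stale statistics are suspected:
EXEC sp_updatestats;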

SQL Server 2014 - some queries very slow (cardinality estimator)

In our production environment we had several servers running SQL Server 2012 SP2 + Windows Server 2008 R2. Three months ago we migrated all of the servers to SQL Server 2014 SP1 + Windows Server 2012. We created new servers with a new configuration (more RAM, more CPU, more disk space), backed up our databases from SQL Server 2012, and restored them to the new SQL Server 2014 servers. After the restore we changed the compatibility level from 110 to 120, rebuilt the indexes, and updated statistics.
But now we have problems with several queries which run very slowly under compatibility level 120. If we change the compatibility level back to the old 110, they run very fast.
I have searched a lot about this issue, but did not find anything.
SQL Server 2014 introduces a new cardinality estimator.
One of the performance improvements in SQL Server 2014 is the redesign of cardinality estimation. The component that performs cardinality estimation (CE) is called the cardinality estimator. It is an essential component of the SQL query processor for query plan generation. Cardinality estimates are predictions of the final row count and of the row counts of intermediate results (such as joins, filtering and aggregation). These estimates have a direct impact on plan choices such as join order, join type, etc. Prior to SQL Server 2014, the cardinality estimator was largely based on the SQL Server 7.0 code base. SQL Server 2014 introduces a new design, and the new cardinality estimator is based on research into modern workloads and learning from past experience.
Trace flags 9481 and 2312 can be used to control which version of the cardinality estimator is used.
Check the queries which cause the problem and compare the estimated number of rows against the actual number of rows in the execution plan properties under the old and new compatibility levels.
Cardinality Estimates in Microsoft SQL Server 2014
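A minimal example of the per-query trace flags (dbo.Orders and the filter are placeholders for your actual slow query; QUERYTRACEON requires sysadmin rights unless applied through a plan guide):

-- Force the legacy, pre-2014 cardinality estimator for this query only:
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2014-01-01'
OPTION (QUERYTRACEON 9481);

-- Force the new SQL Server 2014 cardinality estimator for comparison:
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2014-01-01'
OPTION (QUERYTRACEON 2312);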
From SQL Server 2016 onwards you can set the old cardinality estimator per database without using trace flags or changing the database compatibility level to 110.
ALTER DATABASE SCOPED CONFIGURATION
This statement enables the configuration of a number of database configuration settings at the individual database level, independent of these settings for any other database.
LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
Enables you to set the query optimizer cardinality estimation model to the SQL Server 2012 and earlier version independent of the compatibility level of the database. This is equivalent to Trace Flag 9481. To set this at the instance level, see Trace Flags (Transact-SQL). To accomplish this at the query level, add the QUERYTRACEON query hint.
ON
Sets the query optimizer cardinality estimation model to the SQL Server 2012 and earlier version of the cardinality estimation model.
OFF
Sets the query optimizer cardinality estimation model based on the compatibility level of the database.
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
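To confirm the setting took effect, you can query the documented sys.database_scoped_configurations catalog view:

-- Verify the current database-scoped configuration value:
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'LEGACY_CARDINALITY_ESTIMATION';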

Generate Script In SQL Server 2008 Is Very Slow

Generating a script of my database in SQL Server Management Studio 2008 R2 takes about 45 minutes.
Count of tables: 380
Count of views: 89
Count of stored procedures: 109
Look at your tempdb; sometimes the operation is slow because SQL Server is allocating space there for other objects.
The speed also depends on your system's or server's configuration: if the configuration is low-end, it will take much more time. It further depends on your database size; if you are storing images or PDF files in binary form in the database, generating the scripts will take a long time.
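As a rough way to act on the tempdb suggestion above, here is a sketch using the documented sys.dm_db_file_space_usage DMV (run in the context of tempdb; values are in 8 KB pages):

USE tempdb;
SELECT SUM(user_object_reserved_page_count)     AS user_object_pages,
       SUM(internal_object_reserved_page_count) AS internal_object_pages,
       SUM(unallocated_extent_page_count)       AS free_pages
FROM sys.dm_db_file_space_usage;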

high 'total_worker_time' for stored proc using OPENQUERY in SQL Server 2005

[Cross posted from the Database Administrators site, in the hope that it may gain better traction here. I'll update either site as appropriate.]
I have a stored procedure in SQL Server 2005 (SP2) which contains a single query like the following (simplified for clarity):
SELECT * FROM OPENQUERY(MYODBC, 'SELECT * FROM MyTable WHERE Option = ''Y'' ')
OPTION (MAXDOP 1)
When this proc is run (and run only once) I can see the plan appear in sys.dm_exec_query_stats with a high 'total_worker_time' value (e.g. 34762.196 ms). This is close to the elapsed time. However, in SQL Server Management Studio the statistics show a much lower CPU time, as I'd expect (e.g. 6828 ms). The query takes some time to return because of the slowness of the server it is talking to, but it doesn't return many rows.
I'm aware of the issue where parallel queries in SQL Server 2005 can report odd CPU times, which is why I've tried to turn off any parallelism with the query hint (though I really don't think there was any in this case).
I don't know how to account for the fact that these two ways of looking at CPU usage can differ, nor which of the two might be accurate (I have other reasons for thinking that the CPU usage may be the higher number, but it's tricky to measure). Can anyone give me a steer?
UPDATE: I was assuming that the problem was with the OPENQUERY, so I tried looking at times for a long-running query which doesn't use OPENQUERY. In this case the statistics (gained by setting STATISTICS TIME ON) reported the CPU time as 3315 ms, whereas the DMV gave it as 0.511 ms. The total elapsed times in each case agreed.
total_worker_time in sys.dm_exec_query_stats is cumulative: it is the total CPU (worker) time across all executions of the currently compiled version of the query; see execution_count for the number of executions this represents.
See last_worker_time, min_worker_time or max_worker_time for the timing of individual executions.
reference: http://msdn.microsoft.com/en-us/library/ms189741.aspx
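A small sketch of reading per-execution numbers from the DMV (all columns are documented; the DMV reports worker time in microseconds, converted to milliseconds here):

-- Average and most recent CPU time per execution, in milliseconds:
SELECT qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000.0 AS avg_worker_time_ms,
       qs.last_worker_time / 1000.0 AS last_worker_time_ms,
       qs.min_worker_time / 1000.0 AS min_worker_time_ms,
       qs.max_worker_time / 1000.0 AS max_worker_time_ms
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_worker_time DESC;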