Query against a view under master database is much slower than query directly under specific database - sql

I am not sure whether there exists a general answer before I give more details.
For example: I have a view named vw_View.
I tried the following two queries to get the result:
Under the master database: select * From [test].[dbo].[vw_View]
Under the test database: select * From [dbo].[vw_View]
Could anyone tell me why the same query is much slower when run from the master database than when run from other databases? I even tried the others by:
Use [db] --any other databases not master database
select * From [test].[dbo].[vw_View]
I have checked the actual execution plan; the join order differs, but why would it change, since I have already specified [test].[dbo].[vw_View] when under master?
Just out of curiosity, thanks in advance.

Note this might not be the answer but it was too much text for a comment anyway...
One thing that we hear about a lot is developers complaining about a slow-running procedure which only runs slowly when called from the application but runs fine when executed from SSMS.
More often than not it is due to different execution settings depending on where the procedure is being called from. To check if there is a difference in those settings I usually use SQL Profiler.
In your case you can open two different windows in SSMS, one in the context of the master database and the other in the context of the user database, and run SQL Profiler. The very first event the profiler will capture will be Event Class = Existing Connections and Text Data = -- network protocol: LPC......
This record will show you all the default settings for each session where you are executing the commands. The settings would look something like:
-- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
Now compare the settings of both sessions and see what are the differences.
The profiler also has a column SPID which will help you to identify which window is which. I am pretty sure the answer is somewhere around there.
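If you prefer not to run Profiler, the same SET options can be compared with a query. This is a minimal sketch using sys.dm_exec_sessions; run it in each window (or filter on the two SPIDs shown by Profiler) and compare the flags:

-- Session-level SET options for the current connection
SELECT session_id,
       quoted_identifier,
       arithabort,
       ansi_nulls,
       ansi_padding,
       ansi_warnings,
       concat_null_yields_null,
       transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;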

I have the same issue - executing a view from master runs infinitely long, but executing the same view under any other user database on this server takes only 8 sec.
I have an environment where we just migrated to SQL Server 2017 and all the other databases have Compatibility Level = 2008 (or 2012).
So I did a few tests:
If I create a new DB with the default Compatibility Level = 2017 and run the query, it executes infinitely long
If I change the Compatibility Level to 2008 and reconnect - 8 sec
If I change the Compatibility Level back to 2017 - long run again
And the final thing we noticed about the query itself - the query uses the CHARINDEX function, and if I comment it out the query executes in 8 sec under both compatibility levels.
So... it looks like we have an issue with CHARINDEX execution on a legacy database under the Compatibility Level = 2017 context.
The solution (if you can call it that...) is to execute legacy queries under the (same) legacy execution context.
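A minimal sketch of the checks and changes described above (the database name is the one from the question; on SQL Server 2016+ you could also try the legacy cardinality estimator instead of dropping the compatibility level, which is an alternative not tested in the answer):

-- Check the current compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = 'test';
-- Option 1: drop the database back to the 2008 level (100)
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 100;
-- Option 2: keep the 2017 level (140) but use the legacy cardinality estimator
-- (run the SCOPED CONFIGURATION statement while connected to the target database)
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 140;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;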

Related

Query Store is configured but none of my queries under any load show up

SQL Server 2017 Enterprise Query Store is showing no data at all but shows READ_ONLY as the actual mode
The one similar question in this forum has an answer that doesn't apply - none of the exclusions are present.
I ran:
GO
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_ONLY, INTERVAL_LENGTH_MINUTES = 5, QUERY_CAPTURE_MODE = AUTO)
GO
I also ran all these, having referenced the link below, DB context is MyDB:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-2017
ALTER DATABASE MyDB SET QUERY_STORE = ON;
SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
max_storage_size_mb, readonly_reason, interval_length_minutes,
stale_query_threshold_days, size_based_cleanup_mode_desc,
query_capture_mode_desc
FROM sys.database_query_store_options;
ALTER DATABASE MyDB SET QUERY_STORE CLEAR;
-- Run together...
ALTER DATABASE MyDB SET QUERY_STORE = OFF;
GO
EXEC sp_query_store_consistency_check
GO
ALTER DATABASE MyDB SET QUERY_STORE = ON;
GO
No issues found. The SELECT returns matching Actual and Desired states.
I am a sysadmin role member, who actually sets up all 30+ production servers, and this is the only miscreant.
The server is under heavy load and I need internal eyes on it, in addition to SolarWinds DPA. I've also run sp_blitzquerystore but it returns an empty rowset from the top query, and just the two priority 255 rows from the second.
What on earth did I do wrong? Any clues, anyone, please?
I know this is an old post but for those who come here looking for answers: I do see you ran the query with OPERATION_MODE = READ_ONLY. This would put it into a read-only mode - a mode in which it only reads what is stored in the query store without collecting any additional information. There will be no information shown if the query store has never been in READ_WRITE mode.
If it has been in READ_WRITE mode before and you are still not seeing anything, it is possible that the heavy load on the server is pushing query plans out of the cache.
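If the read-only mode is the culprit, a minimal sketch of switching the Query Store into a collecting state (the database name is taken from the question):

ALTER DATABASE [MyDB] SET QUERY_STORE = ON;
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
-- Verify that both actual and desired states now report READ_WRITE
SELECT actual_state_desc, desired_state_desc, readonly_reason
FROM sys.database_query_store_options;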

Should I set ARITHABORT in SQL Server 2014

I am looking at some code that was written a while ago; in the database helper, before a stored procedure is called, they set ARITHABORT ON.
From my understanding this is not needed for versions of SQL Server later than 2005 if ANSI_WARNINGS is ON.
Do I still need to set this? Does it provide a performance benefit?
Edit 1: According to this article I do not need to set it, but I cannot find another definitive answer on this.
If you look at SET ARITHABORT, setting ANSI_WARNINGS to ON will automatically set ARITHABORT to ON as well with a compatibility level at 90 or higher (SQL Server 2005 or above):
Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option must be explicitly set to ON.
With compatibility level 80 you have to manually set it.
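A quick way to check both things at once - a minimal sketch; SESSIONPROPERTY reports the effective setting for the current session:

-- 90 or higher means ANSI_WARNINGS implies ARITHABORT
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- What the current session actually has in effect
SELECT SESSIONPROPERTY('ARITHABORT') AS arithabort_on,
       SESSIONPROPERTY('ANSI_WARNINGS') AS ansi_warnings_on;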
It is also possible that your software sets it to OFF when it opens a connection, and the only solution was to add it to the procedure.
After upgrading to compatibility level 90 or higher, you should run:
DBCC FREEPROCCACHE
It will remove the cached execution plans and force procedures to recompile.
It can also be good to run both of these commands before new plans get created:
DBCC UPDATEUSAGE(db_name);
EXEC sp_updatestats;
I assume this database might be old (SQL Server 2000 or earlier), so it may be good to run this as well:
DBCC CHECKDB WITH DATA_PURITY;
DBCC CHECKDB will check the DB and its data(types) used and make sure everything is fine with your new version and compatibility level.
In our database we had some SPs running much faster with this, even on later SQL Servers.
This occurs if your database is older and has been carried forward over the years with upgrade scripts. In former days the default was not ON, and therefore older databases might still work with a bad default.

Execution plan cost estimation

There is a problem in which I have to update a table with millions of records based on certain conditions. I have written a long shell script and SQL statements to do it. To check the performance, I plan on using explain plan, studying it from http://docs.oracle.com/cd/B10500_01/server.920/a96533/ex_plan.htm#19259
Here it is written that "Execution plans can differ due to the following:"
Different Costs->
Data volume and statistics
Bind variable types
Initialization parameters - set globally or at session level
Here I don't understand how "Initialization parameters - set globally or at session level" affects the execution plan.
Can anybody explain this?
Also, is there any other way I can check the SQL statements for performance other than explain plan or autotrace?
There are several (initialization) parameters that can influence the execution plan for your statement. The one that immediately comes to mind is OPTIMIZER_MODE.
Other, not so obvious, session settings are things like NLS settings that might influence the usability of indexes.
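For illustration, a hedged sketch of session-level changes of that kind (the values shown are just examples, not recommendations):

ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;  -- favour fast initial response over total throughput
ALTER SESSION SET nls_comp = LINGUISTIC;           -- linguistic comparisons ...
ALTER SESSION SET nls_sort = BINARY_CI;            -- ... can stop ordinary b-tree indexes from being used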
An alternative approach to get the real execution plan (apart from tracing the session and using tkprof) is to use the /*+ gather_plan_statistics */ hint together with 'dbms_xplan.display_cursor()'.
This is done by actually running the statement using the above hint first (so this does take longer than a "normal" explain):
select /*+ gather_plan_statistics */ *
from some_table
join ...
where ...
Then after that statement is finished, you can retrieve the used plan using dbms_xplan:
SELECT *
FROM table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
GLOBAL OR SESSION parameters
Oracle is set up with a set of initialisation parameters. Those will be used by default if nothing is specified to override them. They can be overridden by using ALTER SESSION (affects just a single session) or ALTER SYSTEM (affects all users until Oracle is restarted) commands to change things at the session or system level, or by using optimiser hints in the code. These can have an effect on the plans you see.
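A minimal sketch of the two scopes (optimizer_index_cost_adj is just an example of a parameter that is modifiable at both levels; the value 50 is arbitrary):

-- Session level: only the current session is affected
ALTER SESSION SET optimizer_index_cost_adj = 50;
-- System level: all sessions are affected (SCOPE=MEMORY keeps the change out of the spfile)
ALTER SYSTEM SET optimizer_index_cost_adj = 50 SCOPE = MEMORY;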
In relation to explain plan, a different Oracle database may have different initialisation parameters or have some session/system parameters set which could mean the SAME code behaves differently (by getting a different execution plan on one Oracle database compared to another Oracle database).
Also, as the execution plan is affected by the data chosen, it's possible that a query or package that runs fine in TEST never finishes in PRODUCTION where the volume of data is much larger. This is a common issue when code is not tested with accurate volumes of data (or at least with the table statistics imported from a production database if test cannot hold a full volume of production like data).
TRACING
The suggestions so far tell you how to trace an individual statement assuming you know which statement has a problem but you mention you have a shell script with several SQL statements.
If you are using a here document with a single call to SQL*Plus containing several SQL statements like this ...
#!/bin/ksh
sqlplus -S user/pass <<EOF
set heading off
BEGIN
-- DO YOUR FIRST SQL HERE
-- DO YOUR SECOND SQL HERE
END;
/
EOF
... you can create a single oracle trace file like this
#!/bin/ksh
sqlplus -S user/pass <<EOF
set heading off
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
BEGIN
-- DO YOUR FIRST SQL HERE
-- DO YOUR SECOND SQL HERE
END;
/
EOF
Note that level 8 is for tracing with WAITS. You can do level 4 (bind variables) and level 12 (binds and waits), but I've always found the problem with just level 8. Level 12 can also take a lot longer to execute in full-size environments.
Now run the shell script to do a single execution, then check where your trace file is created, using SQL*Plus:
SQL> show parameter user_dump_dest
/app/oracle/admin/rms/udump
Go to that directory and if no other tracing has been enabled, there will be a .trc file that contains the trace of the entire run of the SQL in your script.
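On 11g and later you can also ask the database directly for the trace file of the current session; a minimal sketch, assuming you have access to v$diag_info:

SELECT value AS trace_file
FROM v$diag_info
WHERE name = 'Default Trace File';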
You need to convert this to a readable format with the Unix tkprof command like this:
unix> tkprof your.trc ~/your.prf sys=no sort=fchela,exeela
Now change to your home directory and there will be a .prf file with the SQL statements listed in order of the execution or fetch time they take, along with their explain plans. This set of parameters to tkprof lets you focus on fixing the statements that take the longest and therefore have the biggest return for tuning.
Note that if your shell script runs several sqlplus commands, each one will create a separate session and therefore adding an ALTER SESSION statement to each one will create separate trace files.
Also, don't forget that it's easy to get lost in the detail. Sometimes tuning is about looking at the overall picture and doing the same thing another way, rather than starting with a single SQL statement that may gain a few seconds but in the overall scheme doesn't reduce the total run time.
If you can, try to minimise the number of update statements: if you have one big table, it will be more efficient to do the updates in one pass of the table (with indexes supporting the updates) rather than doing lots of small updates.
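For example, a hedged sketch of collapsing several narrow updates into a single pass (big_table, col1 and status are hypothetical names):

-- Instead of:
--   UPDATE big_table SET col1 = 'A' WHERE status = 1;
--   UPDATE big_table SET col1 = 'B' WHERE status = 2;
-- touch the table once:
UPDATE big_table
   SET col1 = CASE status WHEN 1 THEN 'A' WHEN 2 THEN 'B' END
 WHERE status IN (1, 2);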
Maybe you can post the relevant parts of your script, the overall time it takes to run and the explain plan if you need more assistance.
Personally I only trust the rowsource operations because this gives the exact plan as executed. There are a few parameters that affect how the plan is constructed. Most parameters are set at instance level but can be overridden at session level. This means that every session could have its own set of effective parameters.
The problem you have is that you need to know the exact settings of the session that is going to run your script. There are a few ways to change session-level settings: they can be changed in a logon trigger, in a stored procedure or in the script itself.
If your script is not influenced by a logon trigger and does not call any code that issues alter session statements, you will be using the settings that your instance has.

SQL Server: Snapshot transaction problem with synonyms in Express Edition

We have 2 databases, say DB1 and DB2.
DB1 contains all the stored procedures, which also access data in DB2.
DB1 uses synonyms to access the tables in DB2.
(Using synonyms is a requirement in our situation)
This works perfectly fine in all situations with SQL Server 2005 Developer Edition.
However, in the Express Edition we get an exception when we do the following:
1. Restart SQL Server
2. Execute the following code within DB1:
set transaction isolation level snapshot
begin transaction
declare @sQuery varchar(max)
set @sQuery = 'Select * from synToSomeTableInDB2'
exec (@sQuery)
commit transaction
This will result in the following error:
Snapshot isolation transaction failed in database '...' because the database was not recovered when the current transaction was started. Retry the transaction after the database has recovered.
The same select query passes fine when used without the EXEC or when run on the Developer Edition.
Restarting the server in step 1 is important, as once a connection has been made to DB2, the code also runs fine on SQL Server Express Edition.
Does anyone have an idea what this is? We need to be able to use EXEC for some dynamic queries.
We've already checked MSDN, searched Google, ...
Any help is greatly appreciated.
--- Edit: March 10 09
As discussed with Ed Harper below, I've filed a bug report for this.
See https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=422150
As found out via Microsoft Connect, the problem is that by default on SQL Server Express Edition the AUTO_CLOSE option is set to true.
Changing this option to false fixes the problem.
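A minimal sketch of that fix (DB2 is the database name from the question):

ALTER DATABASE DB2 SET AUTO_CLOSE OFF;
-- Verify the option is now off
SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'DB2';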
The error message suggests that the query fails because SQL Server is still recovering the database following the service restart when you execute your query.
Does the error always occur on the first attempt to run this code, regardless of the time elapsed since the service was restarted?
Can you confirm from the SQL Server log that the database is recovering correctly after the restart?

Where's the best place to SET NOCOUNT?

For a large database (thousands of stored procedures) running on a dedicated SQL Server, is it better to include SET NOCOUNT ON at the top of every stored procedure, or to set that option at the server level (Properties -> Connections -> "no count" checkbox)? It sounds like the DRY Principle ("Don't Repeat Yourself") applies, and the option should be set in just one place. If the SQL Server also hosted other databases, that would argue against setting it at the server level because other applications might depend on it. Where's the best place to SET NOCOUNT?
Make it the default for the server (which it would be, except for historical reasons). I do this for all servers from the start. Ever wonder why it's SET NOCOUNT ON instead of SET COUNT OFF? It's because way, way back in the Sybase days the only UI was the CLI, and it was natural to show the count when a query might show no results and therefore give no indication that it was complete.
Since it is a dedicated server I would set it at the server level to avoid having to add it to every stored procedure.
The only issue that would come up is if you wanted a stored procedure that did not have NOCOUNT set.
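If you do go the server-level route, a hedged sketch of the T-SQL equivalent of the Properties checkbox (512 is the documented 'user options' bit for NOCOUNT; combine it with any bits already set rather than overwriting them):

-- See which user-option bits are currently set
EXEC sp_configure 'user options';
-- Add the NOCOUNT bit (here assuming no other bits are in use)
EXEC sp_configure 'user options', 512;
RECONFIGURE;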