Should I set ARITHABORT in SQL Server 2014 - sql

I am looking at some code that was written a while ago; in the database helper, ARITHABORT is set to ON before a stored procedure is called.
From my understanding this is not needed for versions of SQL Server later than 2005, as long as ANSI_WARNINGS is ON.
Do I still need to set this? Does it provide a performance benefit?
Edit 1: According to this article I do not need to set it, but I cannot find another definitive answer on this.

If you look at SET ARITHABORT, setting ANSI_WARNINGS to ON will automatically set ARITHABORT to ON as well with a compatibility level at 90 or higher (SQL Server 2005 or above):
Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option must be explicitly set to ON.
With compatibility level 80 you have to manually set it.
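If you are unsure which compatibility level the database is running under, a quick way to check is the following (a minimal sketch; the database name is a placeholder):
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';
-- Only raise it after testing, e.g. for SQL Server 2014:
-- ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 120;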
It is also possible that your software sets it to OFF when it opens a connection, and the only solution was to add it to the procedure.
After upgrading to compatibility level 90 or higher, you should run:
DBCC FREEPROCCACHE
This will remove the cached execution plans, and the procedures will be recompiled.
It can be good to also run both commands before new plans get created:
DBCC UPDATEUSAGE(db_name);
EXEC sp_updatestats;
I assume this database might be old (SQL Server 2000 or earlier), so it may be good to run this as well:
DBCC CHECKDB WITH DATA_PURITY;
DBCC CHECKDB will check the database and the data (and data types) it uses, and make sure everything is fine with your new version and compatibility level.

In our database we had some stored procedures running much faster with this, even on later SQL Server versions.
This happens if your database is older and has been carried forward over the years with upgrade scripts. In earlier days the default was not ON, so older databases might still be working with a bad default.
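If you suspect an upgraded database is still carrying old defaults, the per-database option flags can be inspected in sys.databases (a sketch; substitute your own database name, and note these defaults only apply to connections that do not set the options themselves):
SELECT name, is_arithabort_on, is_ansi_warnings_on, is_ansi_nulls_on, is_quoted_identifier_on
FROM sys.databases
WHERE name = N'YourDatabase';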

Related

Query against a view under master database is much slower than query directly under specific database

I am not sure whether there is a general answer, so let me give more details.
For example: I have a view named vw_View
I tried the following two queries to get the result:
Under the master database: select * From [test].[dbo].[vw_View]
Under the test database: select * From [dbo].[vw_View]
Could anyone tell me why the same query is much slower when run from the master database than from the other databases? I even tried the others with:
Use [db] --any other databases not master database
select * From [test].[dbo].[vw_View]
I have checked the actual execution plan; the join order differs, but why would it change when I have already specified [test].[dbo].[vw_View] while under master?
Just out of curiosity, thanks in advance.
Note this might not be the answer but it was too much text for a comment anyway...
One thing we hear about a lot is developers complaining about a slow-running procedure that only runs slowly when called from the application, but runs fine when executed from SSMS.
More often than not it is due to different execution settings depending on where the procedure is being called from. To check whether there is a difference in those settings, I usually use SQL Profiler.
In your case you can open two different windows in SSMS, one in the context of the master database and the other in the context of the user database, and run SQL Profiler. The very first event the profiler will capture will be Event Class = Existing Connections and Text Data = -- network protocol: LPC...
This record will show you all the default settings for each session where you are executing the commands. The settings would look something like:
-- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
Now compare the settings of both sessions and see what the differences are.
The profiler also has a column SPID which will help you identify which window is which. I am pretty sure the answer is somewhere around there.
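If you prefer not to run Profiler, most of the same SET options can be compared straight from a query window via sys.dm_exec_sessions (a rough equivalent, not the exact Profiler output):
SELECT session_id, program_name, arithabort, ansi_warnings, ansi_nulls,
       quoted_identifier, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;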
I have the same issue - executing a view from master runs seemingly forever, but executing the same view under any other user database on this server takes only 8 seconds.
I have an environment where we just migrated to SQL Server 2017, and all the other databases have a compatibility level of 2008 (or 2012).
So I did a few tests:
If I create a new DB with default Compatibility Level = 2017 and run the query it executes infinitely long
If I change Compatibility Level to 2008 and reconnect - 8 sec
If I change Compatibility Level back to 2017 - long run again
And the final thing we noticed about the query itself: the query uses the CHARINDEX function, and if I comment it out the query executes in the same 8 seconds under both compatibility levels.
So... it looks like we have a mixed issue with CHARINDEX execution on a legacy database under the compatibility level 2017 context.
The solution (if you can call it that) is to execute legacy queries under the (same) legacy execution context.
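If you cannot keep the whole database on the legacy compatibility level, one option worth trying (assuming SQL Server 2016 SP1 or later; I have not verified it against this particular CHARINDEX case) is to pin just the slow statement to the legacy cardinality estimator:
SELECT *
FROM [test].[dbo].[vw_View]
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));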

SSIS Execute sql task

I have created an Execute SQL Task in an SSIS package.
I am getting the error: "INSERT failed because the following SET options have incorrect settings:
'ARITHABORT'. Verify that SET options are correct for use with indexed
views and/or indexes on computed columns and/or filtered indexes and/or query
notifications."
But when I try to execute it directly in SQL Server Management Studio, it does not give any error.
Please let me know if you have come across this kind of issue.
Thanks
SET ARITHABORT in conjunction with SET ANSI_WARNINGS controls how divide-by-zero and overflow errors are handled.
If you want to ignore the overflow and divide-by-zero errors, use this in front of your batch:
SET ARITHABORT OFF
SET ANSI_WARNINGS OFF
If your database compatibility level is 80 or earlier, SET ARITHABORT must be ON.
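For the specific error in the question, note that inserting into a table referenced by an indexed view, or one that has indexes on computed columns or filtered indexes, requires ARITHABORT (and the related ANSI options) to be ON. Assuming that is the cause, a minimal sketch for the SQLStatement of the Execute SQL Task would be:
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
SET QUOTED_IDENTIFIER ON;
-- ...the original INSERT statement goes here...
Whether this is needed depends on how the connection manager sets up the session, so compare the session settings first if you can.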

SQL Server 2000 to SQL Server 2008 R2 migration

I am using SQL Server 2000 with 77 databases, and I want to migrate them to a new SQL Server 2008 R2 installation.
I can do this operation individually with attach or restore commands. Is there any script for migrating all 77 databases to the new server running SQL Server 2008 R2?
Thank you
You could write a script to back up the databases in a loop and restore them on another server, as long as the backup files are visible to both servers. This would allow a WITH MOVE option to cater for different drives/folders.
Backups are also smaller than the MDF/LDF files, so there is less copying.
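As a rough sketch of the backup half of such a loop, run on the SQL Server 2000 source server (so it reads master.dbo.sysdatabases; the UNC backup path is a placeholder), it could look like this:
DECLARE @name sysname
DECLARE @sql varchar(1000)
DECLARE db_cursor CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE dbid > 4   -- skip the system databases
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'BACKUP DATABASE ' + QUOTENAME(@name)
             + ' TO DISK = ''\\shared\backups\' + @name + '.bak'' WITH INIT'
    EXEC (@sql)
    FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
The restore side would be a similar loop on the 2008 R2 server, adding WITH MOVE clauses for the new drive layout.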
You will need to produce your own script, as you would really want to do more than just backup and restore.
Other things you might like to do are: run DBCC UPDATEUSAGE, set the compatibility level, update the statistics, run DBCC CHECKDB WITH DATA_PURITY, and change the page verify option to CHECKSUM. You may have replication and full-text catalogues to deal with as well. All these things would probably need to go into your script.
You would need to set up a script that performs all/some/more of the things mentioned previously on a database, and then extend your script to loop through all your databases. This can be done using a combination of batch files or PowerShell files and utilizing sqlcmd.
For example this is one script I run after restoring the backups onto the new server. This is called from a windows batch file via sqlcmd.
USE [master]
GO
ALTER DATABASE [$(DATABASENAME)] SET COMPATIBILITY_LEVEL = 100
ALTER DATABASE [$(DATABASENAME)] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT
GO
Use [$(DATABASENAME)]
Go
Declare @DBO sysname
--who is the sa user
Select @DBO = name
from sys.server_principals
Where principal_id = 1
--assign sa as the DB owner
exec ('sp_changedbowner ''' + @DBO + '''')
go
--fix the counts
dbcc updateusage (0)
go
--check the db include the column value integrity
dbcc checkdb(0) With Data_Purity, ALL_ERRORMSGS, NO_INFOMSGS
go
--make sure the stats are up to date
exec sp_updatestats
Go
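The batch file side is then just a loop that calls sqlcmd once per database, passing the name in as the scripting variable used above. A hedged example of a single call (the server name and script file name are placeholders):
sqlcmd -S NEWSERVER -E -v DATABASENAME="SomeDatabase" -i post_restore.sql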
You could use a software tool like SQL Compare.
Failing that I would script them individually.
You could run round the internal sysobjects table and build a combined script, but I wouldn't.

SQL Server: Snapshot transaction problem with synonyms in Express Edition

We have 2 databases, say DB1 and DB2.
DB1 contains all the stored procedures which access also data in DB2.
DB1 uses synonyms to access the tables in DB2.
(Using synonyms is a requirement in our situation)
This works perfectly fine in all situations with SQL Server 2005 Developer Edition.
However in the Express Edition, we get an exception when we do the following:
1. Restart SQL Server
2. Execute the following code within DB1:
set transaction isolation level snapshot
begin transaction
declare @sQuery varchar(max)
set @sQuery = 'Select * from synToSomeTableInDB2'
exec (@sQuery)
commit transaction
This will result in the following error:
Snapshot isolation transaction failed in database '...' because the database was not recovered when the current transaction was started. Retry the transaction after the database has recovered.
The same select query passes fine when used without the EXEC or when run on the Developer Edition.
Restarting the server in step 1 is important, as once a connection has been made to DB2, the code also runs fine on SQL Server Express Edition.
Does anyone have an idea what this is? We need to be able to use EXEC for some dynamic queries.
We've already checked MSDN, searched Google, ...
Any help is greatly appreciated.
--- Edit: March 10 09
As discussed with Ed Harper below, I've filed a bug report for this.
See https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=422150
As found out via Microsoft Connect, the problem is that by default on SQL Server Express Edition the AUTO_CLOSE option is set to TRUE.
Changing this option to FALSE fixes the problem.
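For reference, the change is made on the database that is reached through the synonyms (DB2 in the example above):
ALTER DATABASE [DB2] SET AUTO_CLOSE OFF;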
The error message suggests that the query fails because SQL Server is still recovering the database following the service restart when you execute your query.
Does the error always occur on the first attempt to run this code, regardless of the time elapsed since the service was restarted?
Can you confirm from the SQL Server log that the database is recovering correctly after the restart?

Where's the best place to SET NOCOUNT?

For a large database (thousands of stored procedures) running on a dedicated SQL Server, is it better to include SET NOCOUNT ON at the top of every stored procedure, or to set that option at the server level (Properties -> Connections -> "no count" checkbox)? It sounds like the DRY Principle ("Don't Repeat Yourself") applies, and the option should be set in just one place. If the SQL Server also hosted other databases, that would argue against setting it at the server level because other applications might depend on it. Where's the best place to SET NOCOUNT?
Make it the default for the server (which it would be except for historical reasons). I do this for all servers from the start. Ever wonder why it's SET NOCOUNT ON instead of SET COUNT OFF? It's because way way back in Sybase days the only UI was the CLI; and it was natural to show the count when a query might show no results, and therefore no indication it was complete.
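For completeness, the checkbox in SSMS maps to the 'user options' server configuration bitmask, in which NOCOUNT is bit 512. A sketch of doing the same from T-SQL (in practice you would OR 512 into whatever bits are already configured rather than overwrite them):
EXEC sp_configure 'user options', 512  -- 512 = NOCOUNT for new connections
RECONFIGURE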
Since it is a dedicated server, I would set it at the server level to avoid having to add it to every stored procedure.
The only issue that would come up is if you wanted a stored procedure that did not have NOCOUNT set.