Where's the best place to SET NOCOUNT?

For a large database (thousands of stored procedures) running on a dedicated SQL Server, is it better to include SET NOCOUNT ON at the top of every stored procedure, or to set that option at the server level (Properties -> Connections -> "no count" checkbox)? It sounds like the DRY Principle ("Don't Repeat Yourself") applies, and the option should be set in just one place. If the SQL Server also hosted other databases, that would argue against setting it at the server level because other applications might depend on it. Where's the best place to SET NOCOUNT?

Make it the default for the server (which it would be, except for historical reasons). I do this for all servers from the start. Ever wonder why it's SET NOCOUNT ON instead of SET COUNT OFF? It's because way back in the Sybase days the only UI was the CLI, and it was natural to show the count because a query might return no rows and would otherwise give no indication that it had completed.

Since it is a dedicated server, I would set it at the server level to avoid having to add it to every stored procedure.
The only issue that would come up is if you wanted a stored procedure that did not have NOCOUNT set.
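For reference, the server-wide default can also be set from T-SQL rather than the Properties dialog; a minimal sketch, assuming no other user options are currently enabled (512 is the NOCOUNT bit of the 'user options' setting):
EXEC sp_configure 'user options', 512;  -- 512 = NOCOUNT; OR this with any bits already in use
RECONFIGURE;
New connections made after this pick up NOCOUNT by default; existing sessions keep their current setting.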

Related

Check database / server before executing query

I am frequently testing certain areas on a development server and so run a pre-defined SQL statement to truncate the tables in question before testing again. It would take only a slip of a key to switch to the live server.
I'm looking for an IF statement or similar to prevent that.
It could check the server name, the database name, or even that a certain record exists in a different table before running the query.
Any help appreciated
For such cases I use stored procedures; I'd call them TestTruncateTables, etc.
Then instead of running TRUNCATE TABLE directly you execute TestTruncateTables.
Just make sure that the procedures are not created on the live server. If by any chance you happen to execute TestTruncateTables on the live server, you only get an error about a non-existent procedure.
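For completeness, the IF-style guard the question asks for might look something like this (the server, database, and table names are placeholders):
IF @@SERVERNAME = N'DEV-SERVER' AND DB_NAME() = N'TestDb'
BEGIN
    TRUNCATE TABLE dbo.SomeTable;   -- only runs on the expected dev server and database
END
ELSE
BEGIN
    RAISERROR('Not the development server - refusing to truncate.', 16, 1);
END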

Query against a view under master database is much slower than query directly under specific database

I am not sure whether there exists a general answer before I give more details.
For example: I have a view named vw_View
I tried the following two queries to get the result:
Under the master database: SELECT * FROM [test].[dbo].[vw_View]
Under the test database: SELECT * FROM [dbo].[vw_View]
Could anyone tell me why the same query is much slower when run from the master database than from the other databases? I even tried the other databases with:
Use [db] --any other databases not master database
select * From [test].[dbo].[vw_View]
I have checked the actual execution plan; the join order differs, but why would it change when I have already specified [test].[dbo].[vw_View] while under master?
Just out of curiosity, thanks in advance.
Note this might not be the answer but it was too much text for a comment anyway...
One thing that we hear about a lot is developers complaining about a slow-running procedure which only runs slowly when called from the application but runs fine when executed from SSMS.
More often than not it is due to different execution settings depending on where the procedure is being called from. To check whether there is a difference in those settings I usually use SQL Profiler.
In your case you can open two windows in SSMS, one in the context of the master database and the other in the context of the user database, and run SQL Profiler. The very first event the profiler captures will be Event Class = Existing Connections with Text Data = -- network protocol: LPC...
This record will show you all the default settings for each session where you are executing the commands. The settings would look something like:
-- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
Now compare the settings of both sessions and see what are the differences.
The profiler also has a SPID column which will help you identify which window is which. I am pretty sure the answer is somewhere around there.
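As an alternative sketch to Profiler, the same SET options can be compared directly from sys.dm_exec_sessions; the two session IDs below are placeholders for the @@SPID values shown in the two SSMS windows:
SELECT session_id, quoted_identifier, arithabort, ansi_warnings, ansi_padding,
       ansi_nulls, concat_null_yields_null, transaction_isolation_level,
       language, date_format, date_first
FROM sys.dm_exec_sessions
WHERE session_id IN (53, 57);   -- hypothetical SPIDs of the two query windows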
I have the same issue - executing a view from master runs seemingly forever, but executing the same view under any other user database on this server takes only 8 seconds.
I have an environment where we just migrated to SQL Server 2017 and "all other databases" have Compatibility Level = 2008 (or 2012)
So I did a few tests:
If I create a new DB with default Compatibility Level = 2017 and run the query it executes infinitely long
If I change Compatibility Level to 2008 and reconnect - 8 sec
If I change Compatibility Level back to 2017 - long run again
And the final thing we noticed about the query itself: it uses the CHARINDEX function, and if I comment that out the query executes in about 8 seconds under both compatibility levels.
So... it looks like we have a mixed issue with CHARINDEX execution on a legacy database under the Compatibility Level = 2017 context.
The solution (if you can call it that...) is to execute legacy queries under (the same) legacy execution context.
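A minimal sketch of the compatibility-level switch used in the tests above (the database name is a placeholder; 100 is the 2008 level, 140 the 2017 level):
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 100;  -- behave like 2008; the query runs in ~8 sec
-- reconnect and re-run the query, then switch back:
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 140;  -- 2017 behaviour; the query runs long again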

How to hide data of uncommitted transactions from other connections?

I am inserting data into multiple tables and expected all data to be invisible to others until I committed them. But in fact some other application is starting to pick up the data before I am done. I verified this by using a delay between inserts and saw the data immediately.
I read about isolation levels, but it looks like, for example, SET TEMPORARY OPTION isolation_level = 3; has no effect when set only on my side.
Is this a difference between Sybase and other databases, or are there just wrong settings somewhere?
I'm using Sybase SQL Anywhere 11+16.
Here is the proper page for isolation levels in SQL Anywhere 11.0.
I think you should use SET OPTION isolation_level=1; on the user accessing your table (or the group PUBLIC).
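A minimal sketch of that suggestion in SQL Anywhere syntax (the user name is a placeholder):
SET OPTION PUBLIC.isolation_level = 1;            -- default for everyone in the PUBLIC group
SET OPTION reading_user.isolation_level = 1;      -- or just for the user reading your tables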

Can the use or lack of use of "GO" in T-SQL scripts effect the outcome?

We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted before leaving the company that the package needed "GO"s between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GO places commands in batches, where commands are sent to the database provider in a batch for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error somewhere in the script above it. I can imagine GO, in that case and only that case, affecting the outcome. However, we have seen no evidence of any errors logged.
Can someone suggest to me whether or not GO is even likely related to the problem? Assuming no error was encountered, my understanding of the "GO" command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating and using a database. There are also some operations that must take place at the beginning of a batch (like the use statement), so using these keywords means that you'll have to break the script up into batches.
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a pre-requisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
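A small sketch of that column example, as I understand it; without the GO, the UPDATE may fail to compile because the new column is not yet visible to the same batch (table and column names are made up):
ALTER TABLE dbo.SomeTable ADD NewCol INT;
GO
UPDATE dbo.SomeTable SET NewCol = 0;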
I've noticed that if you set up any variables in the script their state (and maybe the variables themselves) are wiped after a 'GO' statement so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
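A minimal sketch of that behaviour; the second SELECT fails with "Must declare the scalar variable" because @x does not survive the GO:
DECLARE @x INT = 1;
SELECT @x;   -- works: same batch
GO
SELECT @x;   -- error: @x is out of scope in this new batch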
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column) but you can't persist local or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
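A small illustration of that point; sending GO to the server itself fails, because only the client tools understand it as a batch separator:
EXEC (N'SELECT 1; GO; SELECT 2;');   -- error: Incorrect syntax near 'GO'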
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0

SQL Server, Remote Stored Procedure, and DTC Transactions

Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures".
What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set.
The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections.
Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior?
Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?
Check out SET REMOTE_PROC_TRANSACTIONS OFF which should disable it.
Or sp_serveroption to configure the linked server generally, not per batch.
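A minimal sketch of the per-batch option, with hypothetical linked-server, column, and procedure names; run the SET before the INSERT ... EXEC that was escalating:
SET REMOTE_PROC_TRANSACTIONS OFF;

DECLARE @results TABLE (col1 INT, col2 VARCHAR(50));
INSERT INTO @results
EXEC MAINFRAME_LINK.SomeDb.dbo.SomeNaturalProc;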
Because you are writing on the MS SQL side, you start a transaction.
By default, it escalates whether it needs to or not, even though the table variable does not participate in the transaction.
I've had similar issues before where the MS SQL side behaves differently based on whether MS SQL writes, whether it is inside a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls to my Sybase linked server...
The following code sets the "Enable Promotion of Distributed Transactions" for linked servers:
USE [master]
GO
EXEC master.dbo.sp_serveroption @server=N'REMOTE_SERVER', @optname=N'remote proc transaction promotion', @optvalue=N'false'
GO
This will allow you to insert the results of a linked server stored procedure call into a table variable.
I'm not sure about DTC, but SSIS (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then just write a simple query to populate your table based on a select from the linked server's table.
That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.