This is a minor question regarding the usage and semantics of the NOCOUNT statement. I've seen it used in a couple of different ways and I want to know what is actually required.
I've seen it listed on MSDN with the trailing semicolon and a GO statement, like so:
SET NOCOUNT ON;
GO
and I've seen it without the trailing semicolon:
SET NOCOUNT ON
GO
and I've seen it without the GO statement:
SET NOCOUNT ON
I realize that GO simply signals the end of a batch, but must it be issued for the NOCOUNT to take effect?
And what is the point of the semicolon?
A semicolon ends the current SQL statement.
To the best of my knowledge, it isn't needed after SET NOCOUNT ON.
You should not need 'GO' to have NOCOUNT take effect, though I'm less certain of that.
A ';' is a statement terminator, I always thought, and a GO statement is a batch terminator.
So if you do DDL creation such as creating a proc, view, function, or other object, you can chain a bunch together like:
Create proc blah as ....
GO
create proc blah2 as ....
GO
And then you can have a single, tidy creation script. If you did not have the GOs it would break with something like: "'CREATE PROCEDURE' must be the first statement in a query batch." This means SQL thought you were doing a single operation for both. 'GO' says: "NEW SCOPE, NEW OBJECT", so it gets around that. If you look at the creation scripts for pubs and Northwind (the old MS test databases), I believe they all use batch terminators throughout a single '*.sql' file. It makes a bunch of creations possible in a single file.
A ';' just terminates the current statement. Most of the time it will be fine to omit them, but one big place where, as some of you SQL experts will know, you cannot get away without one is..... CTEs!
Yes, a CTE will yell at you right away because it begins with a 'with', but you can also use (nolock) hints with 'with', so the parser needs to differentiate between the two uses and THUS you should use a ';'.
EG:
Select * from table -- standard SQL no biggie
Or
Select * from table
Select * from table2 -- these are fine stacked and will run
But...
Select * from table
with a as (select * from table2) select * from a
will break immediately because the parser did not know that the context of 'with' had changed to a new statement. Proper SQL, if you are being meticulous, should look like:
Set NoCount ON; -- No thank you engine I don't need to see counts
Set Transaction Isolation Level Read Uncommitted; -- Set me to dirty reads as default
Select
*
from table
;
Select
*
from table2
;
SQL's Engine sees this as:
Set NoCount ON; Set Transaction Isolation Level Read Uncommitted; Select * from table ; Select * from table2 ;
(The inline comments are gone in this view; on a single line a '--' comment would swallow everything after it, which is one more reason not to rely on line breaks.)
So it needs a little help from the person writing it to say where a statement TERMINATES. It is not human, after all, and does not know where one statement stopped and another one began.
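Back to the CTE example: terminate the preceding statement and the parser treats the WITH as the start of a new statement. A minimal sketch of the fix, reusing the placeholder tables from above:
Select * from table
;with a as (select * from table2) select * from a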
Whatever you do, if you are writing for others under well-defined guidelines, I was always told to use the ';' terminator to mark the official end of a statement.
A GO is a batch terminator, and since each new batch compiles in the current context, it is useful for switching databases like:
Use Database1
GO
Select * from TableOnDatabase1;
Use Database2
GO
Select * from TableOnDatabase2;
Also, to save space I put everything on single lines, but really you should put your main SQL clauses on separate lines, and sub-clauses too, like:
Select
ColumnA
, ColumnB
, count(ColumnC) as cnt
From table
Where thing happens
Group by
ColumnA
, ColumnB
Having Count(ColumnC) > 1
Order by ColumnA
EDIT, for a common real-world example:
set nocount on;
declare @Table table ( ints int);
declare @CursorInt int = 1;
while @CursorInt <= 100
begin
insert into @Table values (@CursorInt)
set @CursorInt += 1
end
-- wait a second, engine, you did not tell me what happened in the 'Messages' section?!
-- aw come on, I want to see each transaction!
set nocount off;
while @CursorInt <= 200
begin
insert into @Table values (@CursorInt)
set @CursorInt += 1
end
-- okay, that is annoying, I did not need to see one hundred "(1 row(s) affected)" messages
You can turn 'nocount' on and off with statement terminators as much as you want within the scope of a procedure. I do it all the time when I want to see some inserts and ignore others in my procs. And in some, if I want to pass a count out, I set an output variable or do a simple select of a final rowcount as the return.
Related
I have encountered an issue in a stored procedure which incrementally copies data from one table to another:
DECLARE @StartId bigint;
SELECT @StartId = COALESCE(MAX(Id), 0)
FROM dbo.TargetTable WITH (NOLOCK);
INSERT INTO dbo.TargetTable (...)
SELECT ...
FROM dbo.SourceTable
WHERE Id > @StartId
This stored procedure hung for more than 15 minutes, so I killed it. However, if I run the two parts separately,
SELECT COALESCE(MAX(Id), 0)
FROM dbo.TargetTable WITH (NOLOCK);
it returns immediately, with the max ID being, say, 100000.
Then I replaced the @StartId variable with 100000 in the INSERT statement and ran it:
INSERT INTO dbo.TargetTable (...)
SELECT ...
FROM dbo.SourceTable
WHERE Id > 100000
This part finishes in less than a few seconds.
It appears that in the original stored procedure, the variable @StartId is inlined into the INSERT statement, resulting in a deadlock.
I am aware there might be better ways or workarounds, such as storing the progress in a third table; however, my question is: can I force the variable to be evaluated before entering the INSERT statement?
Edit: Since this is inside a stored procedure, I don't have the option to use GO.
I've just found out the cause is not the variable expression being inlined; instead, it's an issue with the execution plan with regard to variables.
There is already a Q/A on that, and the answer of applying OPTION (RECOMPILE) works very well for me:
SQL Server Query: Fast with Literal but Slow with Variable
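For reference, a minimal sketch of where the hint goes, mirroring the question's elided column lists:
DECLARE @StartId bigint;
SELECT @StartId = COALESCE(MAX(Id), 0)
FROM dbo.TargetTable WITH (NOLOCK);
INSERT INTO dbo.TargetTable (...)
SELECT ...
FROM dbo.SourceTable
WHERE Id > @StartId
OPTION (RECOMPILE); -- compiles this statement with the runtime value of @StartId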
This may seem very straightforward, but can you put them in two different transactions... so first you do your
BEGIN TRANSACTION GetMax
SELECT COALESCE(MAX(Id), 0) FROM dbo.TargetTable WITH (NOLOCK);
GO
INSERT INTO dbo.TargetTable (...)
SELECT ...
FROM dbo.SourceTable
WHERE Id > 100000
GO
COMMIT TRANSACTION GetMax;
GO
can you try this?
I am writing a T-SQL stored procedure that conditionally adds a record to a table only if the number of similar records is below a certain threshold, 10 in the example below. The problem is this will be run from a web application, so it will run on multiple threads, and I need to ensure that the table never has more than 10 similar records.
The basic gist of the procedure is:
BEGIN
DECLARE @c INT
SELECT @c = COUNT(*)
FROM foo
WHERE bar = @a_param
IF @c < 10
BEGIN
INSERT INTO foo (bar)
VALUES (@a_param)
END
END
I think I could solve any potential concurrency problems by replacing the select statement with:
SELECT @c = COUNT(*) FROM foo WITH (TABLOCKX, HOLDLOCK) WHERE bar = @a_param
But I am curious whether there are any methods other than lock hints for managing concurrency problems in T-SQL.
One option would be to use the sp_getapplock system stored procedure. You can place your critical-section logic in a transaction and use the built-in locking of SQL Server to ensure synchronized access.
Example:
CREATE PROC MyCriticalWork(@MyParam INT)
AS
DECLARE @LockRequestResult INT
SET @LockRequestResult = 0
DECLARE @MyTimeoutMilliseconds INT
SET @MyTimeoutMilliseconds = 5000 -- wait only five seconds max, then time out
BEGIN TRAN
EXEC @LockRequestResult = sp_getapplock 'MyCriticalWork', 'Exclusive', 'Transaction', @MyTimeoutMilliseconds
IF (@LockRequestResult >= 0) BEGIN
/*
DO YOUR CRITICAL READS AND WRITES HERE
*/
--Release the lock
COMMIT TRAN
END ELSE
ROLLBACK TRAN
Use SERIALIZABLE. By definition it provides you the illusion that your transaction is the only transaction running. Be aware that this might result in blocking and deadlocking. In fact this SQL code is a classic candidate for deadlocking: Two transactions might first read a set of rows, then both will try to modify that set of rows. Locking hints are the classic way of solving that problem. Retry also works.
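To make that concrete, here is a minimal sketch of one classic hint-based variant, reusing @c and @a_param from the question. UPDLOCK makes the initial read take update locks, so two concurrent callers cannot both pass the count check and then deadlock on the insert:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN
DECLARE @c INT
SELECT @c = COUNT(*)
FROM foo WITH (UPDLOCK, HOLDLOCK) -- key-range update locks serialize concurrent checks
WHERE bar = @a_param
IF @c < 10
INSERT INTO foo (bar) VALUES (@a_param)
COMMIT TRAN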
As stated in the comment: why are you trying to insert on multiple threads? You cannot write to a table faster on multiple threads.
But you don't need a declare:
insert into [Table_1] (ID, fname, lname)
select 3, 'fname', 'lname'
from [Table_1]
where ID = 3
having COUNT(*) <= 10
If you need to take a lock, then do so.
The data is not 3NF.
You should start any design with a proper data model.
Why rule out a table lock? That could very well be the best approach.
Really, what are the chances? Even without a lock you would need two sessions at a count of 9 submitting at exactly the same time. Even then it would stop at 11. Is 10 an absolute hard number?
If I try to execute the following code, I get the errors
Msg 207, Level 16, State 1, Line 3
Invalid column name 'Another'.
Msg 207, Level 16, State 1, Line 4
Invalid column name 'Another'.
even though the predicate for both IF statements always evaluates to false.
CREATE TABLE #Foo (Bar INT)
GO
IF (1=0)
BEGIN
SELECT Another FROM #Foo
END
GO
IF (1=0)
BEGIN
ALTER TABLE #Foo ADD Another INT
SELECT Another FROM #Foo
END
GO
DROP TABLE #Foo
This is probably over-simplified for the sake of the example; in reality what I need to do is select the values from a column, but only if the column exists. If it doesn't exist, I don't care about it. In the problem that drove me to ask this question, my predicate was along the lines of EXISTS (SELECT * FROM sys.columns WHERE object_id = @ID AND name = @Name). Is there a way to achieve this without resorting to my arch-enemy Dynamic SQL? I understand that my SQL must always be well-formed (i.e. conform to the grammar) - even within a block that's never executed - but I'm flabbergasted that I'm also being forced to make it semantically correct too!
EDIT:
Though I'm not sure the code below adds much to the code above, it's a further example of the problem. In this scenario, I only want to set the value of Definitely (which definitely exists as a column) with the value from Maybe (which maybe exists as a column) if Maybe exists.
IF EXISTS (SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('dbo.TableName', 'U') AND name = 'Maybe')
BEGIN
UPDATE dbo.TableName SET Definitely = Maybe
END
SQL Server doesn't execute line by line. It isn't procedural like .NET or Java code, so there is no "non-executed block".
The batch is compiled in one go. At that point the column doesn't exist, but the table is known, and the table does not have a column called "Another". Fail.
Exactly as expected.
Now, what is the real problem you are trying to solve?
Some options:
two tables, or one table with both columns
use stored procedures to decouple scope
don't use temp tables (maybe not needed; it could be your procedural thinking...)
dynamic SQL (from Mitch's deleted answer)
Edit, after comment: why not hide schema changes behind a view, rather than changing all code to work with columns that may or may not be there?
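For example, a hedged sketch of that view idea, with a hypothetical view name (CREATE OR ALTER needs SQL Server 2016 SP1 or later; on older versions, DROP and re-CREATE the view instead):
IF EXISTS (SELECT * FROM sys.columns
           WHERE object_id = OBJECT_ID('dbo.TableName', 'U') AND name = 'Maybe')
    EXEC('CREATE OR ALTER VIEW dbo.TableName_V AS
          SELECT Definitely, Maybe FROM dbo.TableName')
ELSE
    EXEC('CREATE OR ALTER VIEW dbo.TableName_V AS
          SELECT Definitely, CAST(NULL AS INT) AS Maybe FROM dbo.TableName')
Code can then target dbo.TableName_V and always see both columns.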
You can use EXEC to handle it. It's not really dynamic SQL if the code never actually changes.
For example:
CREATE TABLE dbo.Test (definitely INT NOT NULL)
INSERT INTO dbo.Test (definitely) VALUES (1), (2), (3)
IF EXISTS (SELECT *
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.Test', 'U') AND
name = 'Maybe')
BEGIN
EXEC('UPDATE dbo.Test SET definitely = maybe')
END
SELECT * FROM dbo.Test
ALTER TABLE dbo.Test ADD maybe INT NOT NULL DEFAULT 999
IF EXISTS (SELECT *
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.Test', 'U') AND
name = 'Maybe')
BEGIN
EXEC('UPDATE dbo.Test SET definitely = maybe')
END
SELECT * FROM dbo.Test
DROP TABLE dbo.Test
You can also try Martin Smith's "Workaround" using a non-existing table to get "deferred name resolution" for columns.
I had the same issue.
We have been creating a script for all changes for years, and this is the first time we have hit this issue.
I tried all your answers and didn't find the cause.
In my case it was because of a temporary table used within the script and also within a stored procedure, even though every statement has a GO.
I found that adding an IF EXISTS ... DROP for the temporary table after the script has used it makes everything work correctly.
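A minimal sketch of that guard, with a hypothetical temp table name:
IF OBJECT_ID('tempdb..#MyTemp') IS NOT NULL
    DROP TABLE #MyTemp
GO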
Derived from the answer by @gbn.
What I did to solve the issue was to put a 'GO' between the ALTER query and the query that uses the column added by the ALTER. This makes the two queries run as separate batches, ensuring the 'Another' column exists before the SELECT query is compiled.
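Applied to the question's temp table example, the pattern looks like this (in a script; GO is not available inside a stored procedure):
ALTER TABLE #Foo ADD Another INT
GO
SELECT Another FROM #Foo
GO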
I had a task: to create an update trigger that fires only on a real data change (not an update with the same values). For that purpose I created a copy table and began comparing updated rows with the old, copied ones. When the trigger completes, it's necessary to bring the copy up to date:
UPDATE CopyTable SET
id = s.id,
-- many, many fields
FROM MainTable s WHERE s.id IN (SELECT [id] FROM INSERTED)
AND CopyTable.id = s.id;
I don't want this ugly code in the trigger anymore, so I extracted it into a stored procedure:
CREATE PROCEDURE UpdateCopy AS
BEGIN
UPDATE CopyTable SET
id = s.id,
-- many, many fields
FROM MainTable s WHERE s.id IN (SELECT [id] FROM INSERTED)
AND CopyTable.id = s.id;
END
The result is: Invalid object name 'INSERTED'. How can I work around this?
Leave the code in the trigger. INSERTED is a pseudo-table available only inside trigger code. Do not try to pass the pseudo-table's values around; it may contain a very large number of entries.
This is T-SQL, a declarative data access language. It is not your run-of-the-mill procedural programming language. Common wisdom like 'code reuse' does not apply in SQL and it will only cause you performance issues. Leave the code in the trigger, where it belongs. For ease of re-factoring, generate triggers through some code generation tool so you can easily refactor the triggers.
The problem is that INSERTED is only available during the trigger:
-- Trigger changes to build a list of ids
DECLARE @idStack VARCHAR(max)
SET @idStack = ','
SELECT @idStack = @idStack + ltrim(str(id)) + ',' FROM INSERTED
-- Trigger changes to call the stored proc
EXEC UpdateCopy @idStack
-- Procedure to take a comma-separated list of ids
CREATE PROCEDURE UpdateCopy(@IDList VARCHAR(max)) AS
BEGIN
UPDATE CopyTable SET
id = s.id,
-- many, many fields
FROM MainTable s WHERE charindex(',' + ltrim(str(s.id)) + ',', @IDList) > 0
AND CopyTable.id = s.id;
END
Performance will not be great, but it should allow you to do what you want.
Just typed it in on the fly, but it should run OK.
The real question is "How to pass an array of GUIDs to a stored procedure?" or, more broadly, "How to pass an array to a stored procedure?".
Here are the answers:
http://www.sommarskog.se/arrays-in-sql-2005.html
http://www.sommarskog.se/arrays-in-sql-2008.html
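On SQL Server 2008 and later, the idiomatic answer from those articles is a table-valued parameter. A minimal sketch, with a hypothetical type name (dbo.IdList), applied to the trigger above:
CREATE TYPE dbo.IdList AS TABLE (id INT PRIMARY KEY);
GO
CREATE PROCEDURE UpdateCopy (@Ids dbo.IdList READONLY) AS
BEGIN
UPDATE CopyTable SET
id = s.id,
-- many, many fields
FROM MainTable s WHERE s.id IN (SELECT id FROM @Ids)
AND CopyTable.id = s.id;
END
GO
-- Inside the trigger:
DECLARE @ids dbo.IdList;
INSERT INTO @ids (id) SELECT id FROM INSERTED;
EXEC UpdateCopy @ids;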
I want to check whether SQL Server (2000/05/08) has the ability to write a nested stored procedure. What I mean is WRITING a sub function/procedure inside another stored procedure, NOT calling another SP.
Why I was thinking about it: one of my SPs has repeated lines of code that are specific to that SP alone. If we had this nested-SP feature, I could declare a sub/local procedure inside my main SP, put all the repeating lines in it, and call that local SP from my main SP. I remember such a feature being available in Oracle SPs.
If SQL Server also has this feature, can someone please explain some more details about it or provide a link to the documentation?
I don't recommend doing this, as a new execution plan must be calculated each time it is created, but YES, it definitely can be done (everything is possible, but not always recommended).
Here is an example:
CREATE PROC [dbo].[sp_helloworld]
AS
BEGIN
SELECT 'Hello World'
DECLARE @sSQL VARCHAR(1000)
SET @sSQL = 'CREATE PROC [dbo].[sp_helloworld2]
AS
BEGIN
SELECT ''Hello World 2''
END'
EXEC (@sSQL)
EXEC [sp_helloworld2];
DROP PROC [sp_helloworld2];
END
You will get the warning
The module 'sp_helloworld' depends on the missing object 'sp_helloworld2'.
The module will still be created; however, it cannot run successfully until
the object exists.
You can bypass this warning by using EXEC('sp_helloworld2') above.
But if you call EXEC [sp_helloworld] you will get the results
Hello World
Hello World 2
It does not have that feature. It is hard to see what real benefit such a feature would provide, apart from stopping the code in the nested SPROC from being called from elsewhere.
Oracle's PL/SQL is something of a special case, being a language heavily based on Ada, rather than simple DML with some procedural constructs bolted on. Whether or not you think this is a good idea probably depends on your appetite for procedural code in your DBMS and your liking for learning complex new languages.
The idea of a subroutine, to reduce duplication or otherwise, is largely foreign to other database platforms in my experience (Oracle, MS SQL, Sybase, MySQL, SQLite in the main).
While the SQL-building proc would work, I think John's right in suggesting that you don't use his otherwise-correct answer!
You don't say what form your repeated lines take, so I'll offer three potential alternatives, starting with the simplest:
Do nothing. Accept that procedural SQL is a primitive language lacking so many "essential" constructs that you wouldn't use it at all if it wasn't the only option.
Move your procedural operations outside of the DBMS and execute them in code written in a more sophisticated language. Consider ways in which your architecture could be adjusted to extract business logic from your data storage platform (hey, why not redesign the whole thing!)
If the repetition is happening in DML, SELECTs in particular, consider introducing views to slim down the queries.
Write code to generate, as part of your build process, the stored procedures. That way if the repeated lines ever need to change, you can change them in one place and automatically generate the repetition.
That's four. I thought of another one as I was typing; consider it a bonus.
CREATE TABLE #t1 (digit INT, name NVARCHAR(10));
GO
CREATE PROCEDURE #insert_to_t1
(
@digit INT
, @name NVARCHAR(10)
)
AS
BEGIN
merge #t1 AS tgt
using (SELECT @digit, @name) AS src (digit,name)
ON (tgt.digit = src.digit)
WHEN matched THEN
UPDATE SET name = src.name
WHEN NOT matched THEN
INSERT (digit,name) VALUES (src.digit,src.name);
END;
GO
EXEC #insert_to_t1 1,'One';
EXEC #insert_to_t1 2,'Two';
EXEC #insert_to_t1 3,'Three';
EXEC #insert_to_t1 4,'Not Four';
EXEC #insert_to_t1 4,'Four'; --update previous record!
SELECT * FROM #t1;
What we're doing here is creating a procedure that lives for the life of the connection and which is then later used to insert some data into a table.
John's sp_helloworld does work, but here's the reason why you don't see this done more often.
There is a very large performance impact when a stored procedure is compiled. There's a Microsoft article on troubleshooting performance problems caused by a large number of recompiles, because this really slows your system down quite a bit:
http://support.microsoft.com/kb/243586
Instead of creating the stored procedure, you're better off just creating a string variable with the T-SQL you want to call, and then repeatedly executing that string variable.
Don't get me wrong - that's a pretty bad performance idea too, but it's better than creating stored procedures on the fly. If you can persist this code in a permanent stored procedure or function and eliminate the recompile delays, SQL Server can build a single execution plan for the code once and then reuse that plan very quickly.
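A minimal sketch of that string-variable approach, reusing John's @sSQL example (sp_executesql caches a plan keyed on the statement text, so repeated calls reuse it):
DECLARE @sSQL NVARCHAR(1000)
SET @sSQL = N'SELECT ''Hello World 2'''
EXEC sp_executesql @sSQL -- compiled once, plan reused below
EXEC sp_executesql @sSQL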
I just had a similar situation in a SQL trigger (similar to a SQL procedure) where I basically had the same insert statement to be executed a maximum of 13 times for the 13 possible key values that result from one event. I used a counter, looped 13 times using WHILE, and used CASE to process each of the key values, while keeping a flag to figure out when I needed to insert and when to skip.
It would be very nice if MS developed GOSUB besides GOTO; it would be an easy thing to do!
Creating procedures or functions for "internal routines" pollutes the object structure.
I "implement" it like this:
BODY1:
goto HEADER
HEADER_RET1:
insert into body ...
goto BODY1_RET
BODY2:
goto HEADER
HEADER_RET2:
INSERT INTO body....
goto BODY2_RET
HEADER:
insert into header
if @fork = 1 goto HEADER_RET1
if @fork = 2 goto HEADER_RET2
select 1/0 -- flow check!
I too had need of this. I had two functions that brought back case counts to a stored procedure, which was pulling a list of all users, and their case counts.
Along the lines of
select name, userID, dbo.fnCaseCount(userID), dbo.fnRefCount(UserID)
from table1 t1
left join table2 t2
on t1.userID = t2.UserID
For a relatively tiny set (400 users), it was calling each of the two functions once per user. In total, that's 800 calls out from the stored procedure. Not pretty, but one wouldn't expect a SQL server to have a problem with that few calls.
This was taking over 4 minutes to complete.
Individually, each function call was nearly instantaneous, and even 800 near-instantaneous calls should be nearly instantaneous.
All indexes were in place, and SSMS suggested no new indexes when the execution plan was analyzed for both the stored procedure and the functions.
I copied the code from the functions and put it into the SQL query in the stored procedure. But it appears the transition between the SP and the functions is what ate up the time.
Execution time is still too high at 18 seconds, but it allows the query to complete within our 30-second timeout window.
If I could have had a sub procedure it would have made the code prettier, but still may have added overhead.
I may next try to move the same functionality into a view I can use in a join.
select t1.UserID, t2.name, v1.CaseCount, v2.RefCount
from table1 t1
left join table2 t2
on t1.userID = t2.UserID
left join vwCaseCount v1
on v1.UserID = t1.UserID
left join vwRefCount v2
on v2.UserID = t1.UserID
Okay, I just created views from the functions, and my execution time went from over 4 minutes, to 18 seconds, to 8 seconds. I'll keep playing with it.
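If the logic needs a parameter, an inline table-valued function gets the same plan inlining as a view. A minimal sketch with hypothetical object names (dbo.tvfCaseCount, dbo.Cases), joined the same way as the view version:
CREATE FUNCTION dbo.tvfCaseCount (@UserID INT)
RETURNS TABLE
AS
RETURN
    SELECT COUNT(*) AS CaseCount
    FROM dbo.Cases -- hypothetical source table
    WHERE UserID = @UserID;
GO
SELECT t1.UserID, t2.name, cc.CaseCount
FROM table1 t1
LEFT JOIN table2 t2 ON t1.userID = t2.UserID
OUTER APPLY dbo.tvfCaseCount(t1.UserID) AS cc;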
I agree with andynormancx that there doesn't seem to be much point in doing this.
If you really want the shared code to be contained inside the SP then you could probably cobble something together with GOTO or dynamic SQL, but doing it properly with a separate SP or UDF would be better in almost every way.
For whatever it is worth, here is a working example of a GOTO-based internal subroutine. I went that way in order to have a re-usable script without side effects, external dependencies, or duplicated code:
DECLARE @n INT
-- Subroutine input parameters:
DECLARE @m_mi INT -- return selector
-- Subroutine output parameters:
DECLARE @m_use INT -- instance memory usage
DECLARE @m_low BIT -- low memory flag
DECLARE @r_msg NVARCHAR(max) -- low memory description
-- Subroutine internal variables:
DECLARE @v_low BIT, -- low virtual memory
@p_low BIT -- low physical memory
DECLARE @m_gra INT
/* ---------------------- Main script ----------------------- */
-- 1. First subroutine invocation:
SET @m_mi = 1 GOTO MemInfo
MI1: -- return here after invocation
IF @m_low = 1 PRINT '1:Low memory'
ELSE PRINT '1:Memory OK'
SET @n = 2
WHILE @n > 0
BEGIN
-- 2. Second subroutine invocation:
SET @m_mi = 2 GOTO MemInfo
MI2: -- return here after invocation
IF @m_low = 1 PRINT '2:Low memory'
ELSE PRINT '2:Memory OK'
SET @n = @n - 1
END
GOTO EndOfScript
MemInfo:
/* ------------------- Subroutine MemInfo ------------------- */
-- IN : @m_mi: return point: 1 for label MI1 and 2 for label MI2
-- OUT: @m_use: RAM used by instance,
-- @m_low: low memory condition
-- @r_msg: low memory message
SET @m_low = 1
SELECT @m_use = physical_memory_in_use_kb/1024,
@p_low = process_physical_memory_low,
@v_low = process_virtual_memory_low
FROM sys.dm_os_process_memory
IF @p_low = 1 OR @v_low = 1 BEGIN
SET @r_msg = 'Low memory.' GOTO LOWMEM END
SELECT @m_gra = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Memory Grants Pending'
IF @m_gra > 0 BEGIN
SET @r_msg = 'Memory grants pending' GOTO LOWMEM END
SET @m_low = 0
LOWMEM:
IF @m_mi = 1 GOTO MI1
IF @m_mi = 2 GOTO MI2
EndOfScript:
Thank you all for your replies!
I'm better off creating one more SP with the repeating code and calling that, which is the best way in terms of performance and appearance.