How to access variables in a whole SQL script - sql

I need to access variables in a whole script that is divided into a few sections by the GO command. How can I do that?
Below is source example (that doesn't work):
--script info (beginning of script file)
DECLARE @ScriptCode NVARCHAR(20) = '20120330-01'
--some queries
GO
--and there I cannot use @ScriptCode variable
INSERT INTO DBScriptsHistory(ScriptCode) VALUES(@ScriptCode)

That's correct, because variables exist only within the current batch. Assuming that you need to use the GO statement (e.g. CREATE VIEW must be the first statement in a batch), the simplest solution is probably to use sqlcmd scripting variables.
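For example, in a script run through sqlcmd.exe (or with SQLCMD Mode enabled in SSMS via the Query > SQLCMD Mode menu), a scripting variable survives batch separators. A sketch, reusing the table from the question:
:setvar ScriptCode "20120330-01"
--some queries
GO
--the scripting variable is substituted textually before execution, so it crosses GO
INSERT INTO DBScriptsHistory(ScriptCode) VALUES('$(ScriptCode)')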

The GO statement is a batch separator for the different SQL client tools - it is not part of SQL.
Each batch is separate from the other - just remove the GO statement.
See GO (Transact-SQL):
The scope of local (user-defined) variables is limited to a batch, and cannot be referenced after a GO command.

Remove the GO keyword and it will work: once GO is executed, the variable is no longer in scope.
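With the GO removed, the question's script is a single batch and the variable stays visible:
DECLARE @ScriptCode NVARCHAR(20) = '20120330-01'
--some queries
--still the same batch, so the variable is in scope here
INSERT INTO DBScriptsHistory(ScriptCode) VALUES(@ScriptCode)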

Related

Variable in Stored Procedure

When I execute one of my stored procedures manually, I have to populate several variables.
Most of the variables don't change each time it is run; is it possible to pre-populate the "Value" box so that it only needs to be changed when necessary?
I am reluctant to hard-code values in the script, as there is a series of interlinked procedures that I need to keep dynamic.
I'm going to go out on a limb here and guess that you're talking about SQL Server, and that you're executing your procedure through SSMS, because of your description of the graphical interface. In the future, please tag your question with the specific database platform that the question pertains to, and try to be responsive to early comments. You'll get answers much, much faster. (If I'm wrong, just undo the tagging I added to your question.)
Although stored procedures can contain variables, what you're talking about here are parameters; values that are passed into the procedure from the calling code or application.
Parameters can be defined with default values in their declarations.
CREATE OR ALTER PROCEDURE dbo.SomeProc (
    @SomeBigIntegerValue bigint = 42
)
AS...
When default values exist, the parameter becomes optional for the caller. The procedure can now be called with or without explicit parameters. Either of these will run.
EXECUTE dbo.SomeProc;
EXECUTE dbo.SomeProc
    @SomeBigIntegerValue = 37;
In the first instance, the procedure will use the default value, 42. In the second instance, it will use the parameter value, 37.
You'll note that I named the parameter in the call. That's a best practice, generally, to avoid confusion, but it also allows you to send the parameters in any order. If you don't name them, they will be interpreted in the order they're declared, so you run all manner of risks there.
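To illustrate both points, here is a sketch with a second parameter (dbo.SomeProcTwo and both parameter names are invented for this example):
CREATE OR ALTER PROCEDURE dbo.SomeProcTwo (
    @SomeBigIntegerValue bigint = 42,
    @SomeTextValue nvarchar(50) = N'n/a'
)
AS
BEGIN
    SELECT @SomeBigIntegerValue AS BigIntValue, @SomeTextValue AS TextValue;
END;
GO
--named parameters can be passed in any order
EXECUTE dbo.SomeProcTwo @SomeTextValue = N'hello', @SomeBigIntegerValue = 37;
--unnamed parameters are bound strictly in declaration order
EXECUTE dbo.SomeProcTwo 37, N'hello';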
If you choose to execute the procedure through the GUI, the default values won't be pre-populated, but you can see which parameters have defaults and which don't by expanding the Parameters node under the procedure name in SSMS. I couldn't find an example with defaults, but it'll look something like this:
If you want the procedure to use the default value, just tick the Pass Null Value check box.
(In case you're wondering, we have a truncate proc so that our ETL service accounts can have scaled back permissions without having to do fully-logged, row-by-row deletions...)

T-SQL equivalent of GO

I'm trying to write a T-SQL script to create a database and the corresponding tables. I'm having a problem where the USE statement complains that the database that I just "created" doesn't exist. If I run the script within SQL Server Management Studio so that I can make use of the GO statement, I don't get this issue.
Is there a T-SQL equivalent of GO that I can use to make sure the CREATE DATABASE gets executed before the USE?
I've tried BEGIN/COMMIT TRANSACTION and BEGIN/END but they didn't help.
Is there a T-SQL equivalent of GO that I can use to make sure the CREATE DATABASE gets executed before the USE?
Yes: dynamic SQL. Each dynamic SQL invocation is parsed, compiled, and executed as a separate batch.
EG:
exec ('
create database foo
')
exec ('
use foo
create table bar(id int)
')
Note that when used in dynamic SQL, USE database only changes the database context for the dynamic batch. When control returns to the calling batch, the database context is restored.
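You can verify this with DB_NAME(), assuming the foo database from the example above was created:
exec ('
use foo
select db_name() as inside_dynamic_batch  --reports foo
')
select db_name() as after_dynamic_batch  --reports the database the calling batch started in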
In C# you should use separate SqlCommand calls for each batch.
High-level steps:
Open a connection to master.
Create the new database (just the CREATE DATABASE statement).
Instead of USE, call the SqlConnection.ChangeDatabase(String) method.
Execute the remaining batches.

Call SQL Query from another SQL Server Query (Management Studio)

I know this is redundant, but I'd like to call a query from another query. I know I can just add it to the first one, but the scripts are getting long and at times I don't want to run all of the queries.
I've been looking and my best guess is maybe just using command shell. I was just wondering if there was another way.
Declare @CommandDos VarChar(150) = 'sqlcmd -E -S Server -i h:\SQL\SomeThing.sql'
EXEC master..xp_cmdshell @CommandDos
Code re-use.
Perhaps use functions, i.e. put the query you want called into a function.
Functions can be scalar or table-valued, and either deterministic or nondeterministic.
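For instance, a minimal inline table-valued function (the name and the query are made up for illustration):
CREATE OR ALTER FUNCTION dbo.RecentObjects (@Days int)
RETURNS TABLE
AS
RETURN
    SELECT name, create_date
    FROM sys.objects
    WHERE create_date >= DATEADD(DAY, -@Days, GETDATE());
GO
--the reusable query can now be called from any other query
SELECT * FROM dbo.RecentObjects(30);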
Maybe you can create stored procedures with the queries, then call them inside another one if needed.
What do you think about it?
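A sketch of that idea (both procedure names are invented):
CREATE OR ALTER PROCEDURE dbo.SharedQuery
AS
BEGIN
    SELECT name, create_date FROM sys.objects;
END;
GO
CREATE OR ALTER PROCEDURE dbo.MainRoutine
AS
BEGIN
    --run the shared query as one step of a longer routine
    EXECUTE dbo.SharedQuery;
END;
GO
EXECUTE dbo.MainRoutine;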

Can the use or lack of use of "GO" in T-SQL scripts affect the outcome?

We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted before leaving the company that the package needed "GO"s between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GO places commands in batches, so that commands are sent to the database provider in a batch for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error in that script somewhere above it. I can imagine GO in that case, and only that case, affecting the outcome. However, we have seen no evidence of any errors logged.
Can someone suggest to me whether or not GO is even likely related to the problem? Assuming no error was encountered, my understanding of the GO command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating and using a database. There are also some statements (CREATE VIEW and CREATE PROCEDURE, for example) that must be the first statement in a batch, so using those means that you'll have to break the script up into batches.
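For example (the table and view names here are made up):
CREATE TABLE dbo.Things (Id int);
GO
--CREATE VIEW must be the first statement in its batch, hence the separator above
CREATE VIEW dbo.AllThings AS SELECT Id FROM dbo.Things;
GO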
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a pre-requisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
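A sketch of that case (dbo.SomeTable and NewColumn are hypothetical):
ALTER TABLE dbo.SomeTable ADD NewColumn int;
GO
--without the GO above, the UPDATE can fail to compile because NewColumn does not exist yet
UPDATE dbo.SomeTable SET NewColumn = 0;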
I've noticed that if you set up any variables in the script, their state (and maybe the variables themselves) is wiped after a GO statement, so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
In the first script, the invalid object name causes the whole batch to fail, so the sys.objects query never runs; in the second, the error only aborts the first batch, and the second batch still returns results.
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column) but you can't persist local or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0

Using table just after creating it: object does not exist

I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
And on the second line I get this error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines, it works. But what I would like to accomplish is not to use "GO", because I would like to place this code in my application when it is finished.
So my question is how to make this work without using "GO" so that I can run it programmatically from my C# application.
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second once the first has successfully run anyway, so you could run the first script and on success run the second script. The table has to have been created before you can use it, which is why you need the GO in management studio.
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this can help: I was faced with the same problem and I had to write a little (very basic) parser to split each script into a bunch of mini-scripts, which are sent, one at a time, to the database.
Something even better than tpdi's temp table is a table variable. They run lightning fast and are dropped automatically once out of scope.
This is how you make one:
declare @TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then to insert you just do this:
insert into @TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored proc that creates a temporary table and does whatever it needs to with that. If you create a real table, your app won't be able to run the script more than once, unless it also drops the table -- in which case, you have exactly the functionality of a temp table.
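A minimal sketch of that approach (the procedure and column names are illustrative):
CREATE OR ALTER PROCEDURE dbo.LoadWork
AS
BEGIN
    CREATE TABLE #Work (Id int IDENTITY(1,1), Name nvarchar(50));
    INSERT INTO #Work (Name) VALUES (N'example');
    SELECT Id, Name FROM #Work;
    --#Work is dropped automatically when the procedure finishes
END;
GO
EXECUTE dbo.LoadWork;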