I know this is redundant, but I'd like to Call Query from another Query. I know I can just add it to first one, but the scripts are getting long and at times I don't want to run all of the queries.
I've been looking and my best guess is maybe just using command shell. I was just wondering if there was another way.
Declare @CommandDos VarChar(150) = 'sqlcmd -E -S Server -i h:\SQL\SomeThing.sql'
EXEC master..xp_cmdshell @CommandDos
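Note that xp_cmdshell is disabled by default, so a sysadmin would have to switch it on first; a minimal sketch of that, assuming you are allowed to enable it:
-- xp_cmdshell is off by default; enabling it requires sysadmin rights
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;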
Code re-use.
Perhaps use functions, i.e. put the query you want called into a function. Functions can be scalar or table-valued, and either deterministic or nondeterministic.
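For example, a minimal sketch of wrapping a reusable query in an inline table-valued function (the table and column names are placeholders):
-- the shared query lives in an inline table-valued function
CREATE FUNCTION dbo.ActiveOrders (@CustomerId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
);
GO
-- it can then be selected from or joined inside any other query
SELECT o.OrderId, o.OrderDate
FROM dbo.ActiveOrders(42) AS o;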
Maybe you can create stored procedures with the queries, then call them inside another one if needed.
What do you think about it?
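As a rough sketch of that stored procedure approach (the procedure names and the query inside are invented for illustration):
-- the long query gets its own procedure
CREATE PROCEDURE dbo.usp_QueryA
AS
    SELECT * FROM TableA WHERE colA = 10;
GO
-- the outer procedure calls it only when it is actually needed
CREATE PROCEDURE dbo.usp_Main
AS
BEGIN
    -- ... other queries here ...
    EXEC dbo.usp_QueryA;
END
GO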
I need to insert values from a table into a sproc. For example:
exec mysproc @param1='col1', @param2='col2'
This can be done using a cursor but is there some way to do it via a set operation?
It is not possible to invoke a sproc as part of a "set operation". Probably the reason for that is that the sproc might have arbitrary side effects, like modifying data, returning additional result sets (!) or shutting down the server.
A cursor is the canonical approach to this. (Alas.)
You could modify the sproc to take a TVP, of course. Not sure if that is workable for you.
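A rough sketch of the TVP route, assuming SQL Server 2008 or later (the type, procedure, and table names are illustrative):
-- a table type describes the shape of the parameter set
CREATE TYPE dbo.ParamList AS TABLE (param1 VARCHAR(50), param2 VARCHAR(50));
GO
-- the proc receives all rows at once and can work set-based (TVPs must be READONLY)
CREATE PROCEDURE dbo.mysproc_set @params dbo.ParamList READONLY
AS
    SELECT param1, param2 FROM @params;
GO
-- the caller fills the TVP straight from the source table
DECLARE @p dbo.ParamList;
INSERT INTO @p (param1, param2)
SELECT col1, col2 FROM dbo.SourceTable;
EXEC dbo.mysproc_set @params = @p;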
I imagine that the method you choose will depend on the amount of time you have available, and it's difficult to say which of these methods is most time-consuming without being more intimate with the logic.
There are a few approaches to this problem.
1. As Robert Harvey has alluded to, look at modifying the proc to accept a table-valued parameter (if you are using SQL Server 2008 upwards). If not, you could create a scalar XML parameter that is "decoded" into a table inside the proc.
2. Populate a #table with your "parameter data" and a ROW_NUMBER(), and use a WHILE loop to call the proc for each row in your #table (see the sketch after this list).
3. Create a CURSOR (I hate giving CURSOR advice) of type FAST_FORWARD and iteratively call the procedure.
4. Dynamic SQL: build up a SQL command string and run it with EXEC or, preferably, sp_executesql.
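A sketch of option 2, assuming the parameter values come from some source table (all object names here are placeholders):
-- stage the parameter rows with a row number to walk through
SELECT ROW_NUMBER() OVER (ORDER BY col1) AS RowNo, col1, col2
INTO #params
FROM dbo.SourceTable;

DECLARE @i INT, @max INT;
DECLARE @p1 VARCHAR(50), @p2 VARCHAR(50);
SET @i = 1;
SELECT @max = MAX(RowNo) FROM #params;

WHILE @i <= @max
BEGIN
    SELECT @p1 = col1, @p2 = col2 FROM #params WHERE RowNo = @i;
    EXEC dbo.mysproc @param1 = @p1, @param2 = @p2;
    SET @i = @i + 1;
END

DROP TABLE #params;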
My opinion is that first prize would be to re-engineer the proc to accept parameter filters. Going on the assumption that the dataset you wish to create parameters from is the result of a filtered query:
SELECT Moo, Meow
FROM Woof
WHERE Fu = @ParmX
  AND Bar = @ParmY
Your proc should be called with @ParmX, @ParmY and the logic inside would then proceed in a set-based manner.
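A hedged sketch of what that re-engineered proc might look like (the target table dbo.SomeTarget is invented for illustration):
-- the proc takes the filter values and does all of its work set-based
CREATE PROCEDURE dbo.usp_ProcessWoof
    @ParmX INT,
    @ParmY INT
AS
    INSERT INTO dbo.SomeTarget (Moo, Meow)   -- hypothetical target table
    SELECT Moo, Meow
    FROM Woof
    WHERE Fu = @ParmX
      AND Bar = @ParmY;
GO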
I need to access variables throughout a whole script that is divided into several sections by the GO command. How do I do that?
Below is source example (that doesn't work):
--script info (beginning of script file)
DECLARE @ScriptCode NVARCHAR(20) = '20120330-01'
--some queries
GO
--and here I cannot use the @ScriptCode variable
INSERT INTO DBScriptsHistory(ScriptCode) VALUES(@ScriptCode)
That's correct, because variables exist only within the current batch. Assuming that you need to use the GO statement (e.g. CREATE VIEW must be the first statement in a batch), the simplest solution is probably to use sqlcmd scripting variables.
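A rough sketch using sqlcmd scripting variables (run the script through sqlcmd, or enable SQLCMD Mode in Management Studio); the value is substituted textually, so it survives the GO boundaries:
:setvar ScriptCode "20120330-01"
--some queries
GO
-- $(ScriptCode) is replaced with the literal value before execution
INSERT INTO DBScriptsHistory(ScriptCode) VALUES('$(ScriptCode)')
GO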
The GO statement is a batch separator for the different SQL client tools - it is not part of SQL.
Each batch is separate from the other - just remove the GO statement.
See GO (Transact-SQL):
The scope of local (user-defined) variables is limited to a batch, and cannot be referenced after a GO command.
Remove the GO keyword, it will work. Once you have called GO, the variable is no longer in scope.
I am writing a Transact-SQL script against a SQL Server 2005 instance that queries the file path of each database present. I am able to list the databases present on the system, but how do I run a separate query based on the results?
The following is the output from the list of databases using the command (SELECT name from sys.databases):
name
----
master
tempdb
model
msdb
Now I would like to take these database names (e.g. master, tempdb) and feed them into another query, namely exec sp_helpdb <database_name>.
Any ideas?
Not answering your question directly, but if you want to run a query for each db, you can use sp_msforeachdb.
sp_msforeachdb 'EXEC sp_helpdb [?]'
Otherwise, you're going to need to use the results to generate your SQL.
You can build a cursor based on that query then loop through the results, stuff them into a SQL variable, and use that variable to exec your sproc. Unfortunately I'm not able to give you a sample right now, but that is the way I would approach it.
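For what it's worth, a rough sketch of that cursor approach might look like this:
DECLARE @dbname SYSNAME;
DECLARE db_cur CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases;
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @dbname;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_helpdb @dbname;          -- run the per-database query here
    FETCH NEXT FROM db_cur INTO @dbname;
END
CLOSE db_cur;
DEALLOCATE db_cur;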
In general, the answer to your question would be "use a subquery".
But in this case, you're using a SQL Server stored procedure. So the best approach is to write your own stored procedure to:
1) Call sp_helpdb (or select from master..sysdatabases)
2) Iterate through the results
Here's an example:
http://www.mssqltips.com/sqlservertip/1070/simple-script-to-backup-all-sql-server-databases/
If I am understanding correctly, you can use a derived table here:
SELECT d.name   -- (your query goes here)
FROM (SELECT name FROM sys.databases) AS d
I have worked on SQL stored procedures and I have noticed that many people use two different approaches -
First, to use plain select queries, i.e. something like
Select * from TableA where colA = 10 order by colA
Second, to do the same by constructing the query dynamically, i.e. like
Declare @sqlstring varchar(100)
Declare @sqlwhereclause varchar(100)
Declare @sqlorderby varchar(100)
Set @sqlstring = 'Select * from TableA '
Set @sqlwhereclause = 'where colA = 10 '
Set @sqlorderby = 'order by colA'
Set @sqlstring = @sqlstring + @sqlwhereclause + @sqlorderby
exec (@sqlstring)
Now, I know both work fine. But, the second method I mentioned is a little annoying to maintain.
I want to know which one is better? Is there any specific reason one would resort to one method over the other? Any benefits of one method over other?
Use the first one. This will allow a query plan to be cached properly, apart from being the way you are supposed to work with SQL.
The second one is open to SQL Injection attacks, apart from the other issues.
With the dynamic SQL you will not get compile time checking, so it may fail only when invoked (the sooner you know about incorrect syntax, the better).
And, you noted yourself, the maintenance burden is also higher.
The second method has the obvious drawback of not being syntax checked at compile time. It does however allow a dynamic order by clause, which the first does not. I recommend that you always use the first example unless you have a very good reason to make the query dynamic. And, as @Oded has already pointed out, be sure to guard yourself against SQL injection if you do go for the second approach.
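If you do go dynamic for the order by, a hedged sketch of keeping the rest parameterized with sp_executesql (the allowed column list is made up):
-- only the ORDER BY column is concatenated, and it is checked against a
-- whitelist and quoted; the filter value remains a real parameter
DECLARE @orderBy SYSNAME = 'colA';
IF @orderBy NOT IN ('colA', 'colB')
    SET @orderBy = 'colA';
DECLARE @sql NVARCHAR(200) =
    N'SELECT * FROM TableA WHERE colA = @val ORDER BY ' + QUOTENAME(@orderBy);
EXEC sp_executesql @sql, N'@val INT', @val = 10;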
I don't have a full comprehensive answer for you, but I can tell you right now that the latter method is much more difficult to work with when importing the stored procedure as a function in an ORM. Since the SQL is constructed dynamically, you have to manually create any type-classes that are returned from the stored procedure that aren't directly correlated to entities in your model.
With that in mind, there are times where you simply can't avoid constructing a SQL statement, especially when where clauses and joins depend on the parameters passed in. In my experience, I have found that stored procs that are creating large, variably joined/whered statements for EXECs are trying to do too many things. In these situations, I would recommend you keep the Single Responsibility Principle in mind.
Executing dynamic SQL inside a stored procedure reduces the value of using stored procedures to just a saved query container. Stored procedures are mostly beneficial in that the query execution plan (a very costly operation) is compiled and stored in memory the first time the procedure is executed. This means that every subsequent execution of the procedure bypasses the query plan calculations and jumps right to the data retrieval portion of the operation.
Also, allowing a stored procedure to take an executable query string as a parameter is dangerous. Anyone with execute permission granted on the procedure could potentially wreak havoc on the rest of the database.
I am using SQL Server 2008 Enterprise on Windows Server 2008 Enterprise. In a stored procedure, we can execute a SELECT statement directly, but it could also be executed in the new way shown below. I am wondering which method is better, and why?
New method,
declare @teststatement varchar(500)
set @teststatement = 'SELECT * from sometable'
print @teststatement
exec (@teststatement)
Traditional method,
SELECT * from sometable
regards,
George
FYI: it’s not a new method, it is known as Dynamic SQL.
Dynamic SQL is preferred when we need to set or concatenate certain values into SQL statements.
Traditional, static SQL statements are recommended because stored procedures are compiled on their first run ("Stored Procedures are Compiled on First Run"), and the execution plans for their statements are created at that compile time.
Dynamic SQL is skipped when those execution plans are created, because at that point it is just a string (VARCHAR or NVARCHAR, as declared).
Refer to the following articles for more details about dynamic queries and stored procs:
Introduction to Dynamic SQL Part 1
Introduction to Dynamic SQL Part 2
Everything you wanted to know about Stored Procedures
The traditional method is safer, because the query is parsed when you save it. The query in the 'exec' method is not parsed and can contain errors.
The "new" way, as mentioned, has nothing to do with SQL 2008. EXEC has been available for quite some time. It's also - in most cases - a Very Bad Idea.
You lose parameterization - meaning you are now vulnerable to SQL injection. It's ugly and error-prone. It's less efficient. And it creates a new execution scope - meaning it can't share variables, temp tables, etc. from its calling stored proc.
sp_executesql is another (and preferred) method of executing dynamic SQL. It's what your client apps use, and it supports parameters - which fixes the most glaring problem of EXEC. However, it too has very limited use cases within a stored proc. About the only redeeming use is when you need a dynamic table or column name. T-SQL does not support a variable for that - so you need to use sp_executesql. The number of times you need or should be doing that is very low.
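For completeness, a minimal sketch of that dynamic-table-name case (the table and column names are placeholders, and the name should still come from a trusted source):
DECLARE @tableName SYSNAME = N'sometable';
-- QUOTENAME guards the identifier; the filter value stays a real parameter
DECLARE @sql NVARCHAR(300) =
    N'SELECT * FROM ' + QUOTENAME(@tableName) + N' WHERE id = @id';
EXEC sp_executesql @sql, N'@id INT', @id = 1;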
Bottom line - you'd be best off forgetting you ever heard of it.