I have a PowerShell script that calls a SQL script. This currently works, but inside the SQL script I have some hard-coded parameters that I would like to pass to the SQL script from PowerShell instead.
Here is the snippet from the PowerShell script:
function ExecSqlScript([string] $scriptName)
{
$scriptFile = $script:currentDir + $scriptName
$sqlLog = $script:logFileDir + $scriptName + "_{0:yyyyMMdd_HHmmss}.log" -f (Get-Date)
$result = sqlcmd -S uk-ldn-dt270 -U sa -P passwordhere3! -i $scriptFile -b | Tee-Object -FilePath $sqlLog
if ($result -like "*Msg *, Level *, State *" -Or $result -like "*Sqlcmd: Error:*")
{
throw "SQL script " + $scriptFile + " failed: " + $result
}
}
try
{
ExecSqlScript "restoreDatabase.sql"
}
catch
{
# Some error handling here
}
And this is from the SQL script:
USE MASTER
GO
DECLARE @dbName varchar(255)
SET @dbName = 'HardCodedDatabaseName'
So I want to pass in the value for dbName. Any ideas?
You could take advantage of sqlcmd's scripting variables. These can be used in a script file and are referenced with $(). Like so,
-- Sql script file
use $(db);
select something from somewhere;
When calling sqlcmd, use the -v parameter to assign variables. Like so,
sqlcmd -S server\instance -E -v db="MyDatabase" -i s.sql
Edit
Mind the SQL syntax when setting variables. Consider the following script:
DECLARE @dbName varchar(255)
SET @dbName = $(db)
select 'val' = @dbName
As passed to SQL Server, it looks like so (Profiler helps here):
use master;
DECLARE @dbName varchar(255)
SET @dbName = foo
select 'val' = @dbName
This is obviously invalid syntax, as SET @dbName = foo won't make much sense. The value ought to be within single quotes, like so:
sqlcmd -S server\instance -E -v db="'foo'" -i s.sql
Just in case someone else needs to do this... here is a working example.
PowerShell script:
sqlcmd -S uk-ldn-dt270 -U sa -P 1NetNasdf£! -v db="'DatabaseNameHere'" -i $scriptFile -b | Tee-Object -FilePath $sqlLog
Note the -v switch used to assign the variables.
And here is the MS SQL:
USE MASTER
GO
if db_id($(db)) is null
BEGIN
EXEC('
RESTORE DATABASE ' + $(db) + '
FROM DISK = ''D:\DB Backup\EmptyLiveV5.bak''
WITH MOVE ''LiveV5_Data'' TO ''C:\Program Files (x86)\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\LiveV5_' + $(db) + '.MDF'',
MOVE ''LiveV5_Log'' To ''C:\Program Files (x86)\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\LiveV5_' + $(db) + '_log.LDF'', REPLACE,
STATS =10')
END
Note: you do not have to assign the scripting variable to a normal SQL variable like this:
SET @dbName = $(db)
You can just use it directly in your SQL code. Happy coding.
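Putting that together with the original function: a minimal sketch of how the asker's ExecSqlScript could forward a database name (the $dbName parameter and the single-quote wrapping are illustrative, following the quoting convention discussed above):
function ExecSqlScript([string] $scriptName, [string] $dbName)
{
    $scriptFile = $script:currentDir + $scriptName
    $sqlLog = $script:logFileDir + $scriptName + "_{0:yyyyMMdd_HHmmss}.log" -f (Get-Date)
    # Pass the database name through sqlcmd's -v switch, pre-wrapped in single quotes
    $result = sqlcmd -S uk-ldn-dt270 -U sa -P passwordhere3! -v db="'$dbName'" -i $scriptFile -b |
        Tee-Object -FilePath $sqlLog
    if ($result -like "*Msg *, Level *, State *" -or $result -like "*Sqlcmd: Error:*")
    {
        throw "SQL script $scriptFile failed: $result"
    }
}

ExecSqlScript "restoreDatabase.sql" "DatabaseNameHere"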
Here's a full example using a different PowerShell approach, where I use a specific script to reset local databases for a clean development environment.
## Reset the Database
$resetScript = "C:\ResetSite\resetDatabases.sql"
Write-Host "Resetting the DB - Running $($resetScript)"
$connectionString = "Server = localhost; Database = 'master'; UID = myusername; PWD = mypassword"
# Create variables & params
$sqlCmdVariables = @(
    "Database=$($siteConfig.db_name)",
    "UserName=$($siteConfig.db_username)",
    "UserPassword=$($siteConfig.db_user_password)"
)
$sqlCmdParameters = @{
    InputFile        = $resetScript
    QueryTimeout     = 1800
    ConnectionString = $connectionString
    Variable         = $sqlCmdVariables
}
# Invoke
Invoke-Sqlcmd @sqlCmdParameters
The .sql file then uses the parameters passed in, the same way @nmbell mentions.
-- Declare the vars
DECLARE @Database nvarchar(100), @UserName nvarchar(100), @UserPassword nvarchar(100)
-- Set the vars
SET @Database = '$(Database)' -- database name
SET @UserName = '$(UserName)' -- SQL login and database username
SET @UserPassword = '$(UserPassword)' -- login password
-- ...more stuff here... use the vars like normal
This is partly derived from this blog post but modified slightly to use a file rather than an inline query.
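For completeness, $siteConfig above is assumed to be a settings object supplied elsewhere in the build; a hypothetical stand-in would be:
# Hypothetical stand-in for the $siteConfig object referenced above
$siteConfig = [pscustomobject]@{
    db_name          = "MyDevDatabase"
    db_username      = "MyAppUser"
    db_user_password = "MyAppPassword"
}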
Adjusting vonPryz's answer to use:
SET @dbName = '$(db)'
means you can pass in the parameter from the command line in a more natural form as
sqlcmd -S server\instance -E -v db="foo" -i s.sql
The SqlCmd variable still substitutes correctly.
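To see the two conventions side by side, both of the following calls should end up sending the same quoted value to the server (the database name foo is illustrative):
# Quotes supplied on the command line; the script uses SET @dbName = $(db)
sqlcmd -S server\instance -E -v db="'foo'" -i s.sql

# Quotes supplied in the script (SET @dbName = '$(db)'); plain value here
sqlcmd -S server\instance -E -v db="foo" -i s.sql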
I know this is an old question, but I have a much easier way if you only need a small amount of data from PowerShell (or even a large amount, as long as all you want is text), and you work mainly in SQL for your scripting like I do:
1: Start PowerShell from SQL using xp_cmdshell, and insert the results into a one-column table variable that allows NULLs, e.g.:
DECLARE @Results TABLE (Line varchar(1000) NULL)
INSERT INTO @Results
EXEC master.dbo.xp_cmdshell '"powershell.exe C:\PowershellScripts\MyScript.ps1 MyParams"'
2: In your PowerShell script, for anything you want to pass back to SQL, simply use Write-Output, e.g.:
Write-Output $returned_data
You can do this as many times as you want. If you have 10,000 values to pass back to SQL, then you could use Write-Output 10,000 times.
So in the above example, once the MyScript.ps1 PowerShell script finishes running, all of the output will be in the @Results table variable, ready to be used, queried, imported into individual variables, whatever you want really.
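As a rough sketch of what MyScript.ps1 might contain (the parameter name and values are illustrative), every line written to the output stream becomes a row in @Results:
# MyScript.ps1 (illustrative)
param([string] $MyParams)

Write-Output "First line back to SQL"
Write-Output "Parameter received: $MyParams"

# Each element of a collection becomes its own row in @Results
Get-ChildItem C:\PowershellScripts | ForEach-Object { Write-Output $_.Name }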
I am new to sqlcmd and I'm trying to execute this sqlcmd code:
:Connect SERVERNAME
!!if exist $(FullBackup) del $(FullBackup)
GO
!!if exist $(TransactionLog) del $(TransactionLog)
GO
I am passing variables $(FullBackup) and $(TransactionLog) through a powershell script:
& $app -i $syncFileLocal -E -b -v FullBackup=("""$fullbackup""") TransactionLog=("""$transactionLog""");
where syncFileLocal contains the above sqlcmd command.
Somehow the execution stops after the second :Connect PROD-SQLMASTER
UPDATE:
When I use hardcoded values for $(FullBackup) and $(TransactionLog),
the script seems to work. Is there any way I could do it by passing variables through PowerShell?
Instead of:
FullBackup=("""$fullbackup""") TransactionLog=("""$transactionLog""")
try:
FullBackup="""$fullbackup""" TransactionLog="""$transactionLog"""
If you use (), the grouping operator, its output is passed as a separate argument, which is not what you want.
Do note, however, that even the solution above relies on PowerShell's fundamentally broken argument-passing to external programs, as of v7.0 - see this answer.
If sqlcmd is implemented properly (I don't know if it is), the right way to pass the arguments is:
FullBackup=$fullbackup TransactionLog=$transactionLog
That way, you would rely on PowerShell's on-demand, behind-the-scenes re-quoting of arguments, where if $fullbackup or $translactionLog contained spaces, the arguments would be passed as, for instance, "FullBackup=c:\path\to\backup 1" and "TransactionLog=c:\path\to\log 1"
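For instance, under that assumption, a sketch of the call with space-containing paths (the paths are illustrative) would be:
# PowerShell re-quotes each argument as a whole because the values contain spaces
$fullbackup     = 'c:\path\to\backup 1'
$transactionLog = 'c:\path\to\log 1'
& $app -i $syncFileLocal -E -b -v FullBackup=$fullbackup TransactionLog=$transactionLog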
I found a solution. I recommend using this with appropriate validations:
:Connect $(ServerMaster)
DECLARE @resultBkp INT
EXEC master.dbo.xp_fileexist N'$(FullBackup)', @resultBkp OUTPUT
IF (@resultBkp = 1)
BEGIN
    DECLARE @resultDeleteBkp INT
    EXECUTE master.sys.xp_cmdshell 'del "$(FullBackup)"'
    EXEC master.dbo.xp_fileexist N'$(FullBackup)', @resultDeleteBkp OUTPUT
    IF (@resultDeleteBkp = 0)
    BEGIN
        PRINT 'Backup Deleted'
    END
    ELSE
    BEGIN
        SELECT ERROR_NUMBER(), ERROR_MESSAGE();
        RETURN;
    END
END
ELSE
BEGIN
    PRINT 'Backup file not found'
END
I used master.dbo.xp_fileexist to check whether the file exists, then used the
master.sys.xp_cmdshell command to delete the file.
To enable master.sys.xp_cmdshell for the database server, please use this solution:
Enable 'xp_cmdshell' SQL Server
I have tested it, and it works fine when I pass the arguments via PowerShell.
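The PowerShell side of such a call might look like this sketch (the server name and file paths are assumptions):
# Illustrative invocation supplying the scripting variables used in the script above
$fullbackup     = 'D:\Backups\Full.bak'
$transactionLog = 'D:\Backups\Log.trn'
sqlcmd -i $syncFileLocal -E -b -v ServerMaster="PROD-SQLMASTER" FullBackup=$fullbackup TransactionLog=$transactionLog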
I'm trying to execute a stored procedure (which I know works) in T-SQL, get those results into a CSV file, and put that file into a directory. I'm not sure exactly how to formulate that query, though. Here's what I've tried thus far, to no avail:
EXECUTE CLR_ExportQueryToCSV @QueryCommand = 'execute databaseName.dbo.StoredProcedureName',
    @FilePath = 'C:\Directory',
    @FileName = '\FileToExport.csv',
    @IncludeHeaders = 1
I realize CLR_ExportQueryToCSV doesn't exist. Is there any system stored procedure that will do what I'm wanting?
bcp "SELECT Col1,Col2,Col3 FROM MyDatabase.dbo.MyTable" queryout "D:\MyTable.csv" -c -t , -S SERVERNAME -T
See the bcp docs for details.
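If you prefer to drive the export from PowerShell, a minimal wrapper around the same bcp call might look like this (server and object names are the same placeholders as above):
# Illustrative PowerShell wrapper around the bcp export
$query   = "SELECT Col1, Col2, Col3 FROM MyDatabase.dbo.MyTable"
$outFile = "D:\MyTable.csv"
bcp $query queryout $outFile -c -t "," -S SERVERNAME -T
if ($LASTEXITCODE -ne 0) { throw "bcp export failed with exit code $LASTEXITCODE" }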
Unfortunately there's no generic/supported method in SQL Server to do what you're asking.
If you're simply looking for a way to dump the results of a SQL query to CSV then I'd be more inclined to either write an SSIS package to do the job or a C# console app, either of which can be scheduled.
Here's an example in C#:
using System.Configuration; // for ConfigurationManager
using System.Data;
using System.Data.SqlClient;
using System.IO;
using CsvHelper; // CsvWriter, from the CsvHelper NuGet package

static void Main(string[] args)
{
    WriteQueryResultsToCsv(@"c:\SqlResults.csv",
        "MyDbConnectionStringName",
        "select * from MyTable where x > @x",
        new SqlParameter("@x", SqlDbType.Int) {Value = 1});
}
private static void WriteQueryResultsToCsv(string csvPath, string connectionStringName, string sql, params SqlParameter[] parameters)
{
    // Requires reference to System.Configuration
    var connectionString = ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
    using (var db = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, db))
    {
        db.Open();
        cmd.Parameters.AddRange(parameters);
        using (var dr = cmd.ExecuteReader())
        using (var dw = new StreamWriter(csvPath))
        using (var csv = new CsvWriter(dw)) // Requires CsvHelper package from NuGet
        {
            // Write column headers
            for (var c = 0; c < dr.FieldCount; c++)
                csv.WriteField(dr.GetName(c));
            // Write data rows
            while (dr.Read())
            {
                csv.NextRecord();
                for (var c = 0; c < dr.FieldCount; c++)
                {
                    csv.WriteField(dr.GetValue(c).ToString());
                }
            }
        }
    }
}
Invoking CMD is one way to achieve it, and it can be automated:
DECLARE @sql VARCHAR(max)
DECLARE @CsvFile NVARCHAR(500)
DECLARE @cmd NVARCHAR(4000)
SET @sql = 'Exec [dbo].[Usp_CSVextract]'
SET @CsvFile = 'C:\Test.csv'
SET @cmd =
    'bcp '+CHAR(34)+@sql+CHAR(34)+' queryout '+CHAR(34)+@CsvFile+CHAR(34)+' -S '+@@servername
    +' -c -t'+CHAR(34)+','+CHAR(34)+' -r'+CHAR(34)+'\n'+CHAR(34)+' -T'
EXEC master.dbo.xp_cmdshell @cmd
There IS one way of doing what you're asking in SQL, but it's not neat or supported (AFAIK).
EXEC xp_cmdshell 'SQLCMD -S . -d MyDatabase -Q "select * from MyTable" -s "," -o "\\servername\output\result.csv" -W'
You can find documentation for SQLCMD here, but essentially what this does is use the xp_cmdshell SP to execute the SQLCMD command line utility on the server and execute a sql statement, piping the output to a CSV file.
The params I've used are as follows:
-S: SQL Server name - . means current server
-d: database name
-Q: run SQL query and exit
-s: column separator
-o: output file. This is relative to the SQL Server, not your PC
-W: dynamic column width
By default SQL Server does not allow you to run xp_cmdshell, so you may need to run the following SQL to enable it.
-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
-- To update the currently configured value for advanced options.
RECONFIGURE
GO
-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO
-- To update the currently configured value for this feature.
RECONFIGURE
GO
Your best bet, I have found, is to use SSIS. Create an Execute SQL Task and run your stored procedure. Then create a data flow set of tasks to move the data from a temporary data table to your CSV file.
Do a search on this web site for how to make an SSIS package. There are a few really nice examples of how to do this.
Good luck to you.
CLR_ExportQueryToCSV is a CLR stored procedure located on CodePlex.
You need to install the project on a SQL Server instance with CLR integration enabled.
First of all, can I pass parameters for the path in a BCP command?
I wrote this query in an Execute SQL Task:
EXEC xp_cmdshell 'bcp "SELECT * FROM TLC.dbo.World_Hosts" queryout "C:\Users\akshay.kapila\Desktop\TLC\Foreachlearn\Dest\?.txt" -T -c -t'
I have given ? in the path. I want specific countries in place of that. They are held in the variable "Country". I am using a Foreach loop which should create a file (e.g. Aus.txt, In.txt) every time the loop runs for that specific value.
Can I use it this way? If not, how can I pass a variable value to the path in the BCP command?
You can use a variable as the SQLSourceType in your Execute SQL Task.
Create a variable to hold your bcp command, it may look like:
"EXEC xp_cmdshell 'bcp \"SELECT ''abc'' as output\" queryout \"" + #[User::strFileName] + "\" -T -c -t '"
Here #[User::strFileName] is the dynamic file you want to generate.
Then in the Execute SQL Task, change SQLSourceType to variable, and select the variable you just generated.
I have been playing around with database backup automation scripts and in particular the one at this link:
http://support.microsoft.com/kb/2019698
I got everything working fine and even added automated compression using 7zip, logging, and, with the help of VBScript, a scheduled email notification. However, even without all that, you can see this is a bit heavy. It's now easily reaching 400 lines of code.
I am really not comfortable having all my stuff in one block like this, and I want to separate it out. So I can have, say, a compression file called BackupCompress.sql and a log file called BackupLogReport.sql, all of which would be called from inside the main Backup.sql script.
The Backup.sql script is in turn run from a Backup.bat file which is set to run in the scheduler.
All of this works like a charm. But I am at a loss as to how to call BackupCompress.sql from within BackupLogReport.sql and pass in parameters and get a return value.
In the Backup.bat file I use this command to spin everything up and pass parameters to it:
SQLCMD -S %SQLDATABASE% -d master -i %BACKUP_FOLDER%\Backup.sql -v Param1="%Param1%"
In the Backup.sql file I get those parameters simply by:
DECLARE @Param1 NVARCHAR(256) = '$(Param1)'
from then on as my script runs it uses whatever I want to pass in.
I tried using standard SQL stored procedure logic to call another procedure, like this:
EXEC BackupCompress.sql
    @AnotherParam = @Param1
I also tried:
EXECUTE sp_executesql BackupCompress.sql @Param1
Finally I tried:
SET @cmd = 'SQLCMD -S ' + @@ServerName + ' -d master -i $(BACKUP_FOLDER)\BackupCompress.sql -v Param1 = ' + @Param1
EXEC xp_cmdshell @cmd, no_output
but it doesn't work: my files which were being compressed simply don't get compressed. I get no error message, and everything else continues to work fine.
EDIT: I was getting an error message on the last one, but I fixed it. However, I still don't get my little zip file. I even put PRINTs into the file to see if it was actually being executed, but it does not seem to be.
EDIT2: Another option I have tried almost works, but I can't figure out how to pass parameters from within the SQL file to the other file... As a result, it generates an error saying it can't find the file, as it's treating the path as a literal string instead of the variable value I want to pass.
:!!SQLCMD -S @@ServerName -d master -i @CFG_BACKUP_PATH\BackupCompress.sql -v Param1 = @Param1
xp_cmdshell can return values. These values can be captured into a table variable that you could use to "see" the results, and perhaps determine where the problem lies:
DECLARE @cmd VARCHAR(255);
DECLARE @Param1 NVARCHAR(256) = '$(Param1)';
DECLARE @Results TABLE
(
    ResultsText NVARCHAR(MAX)
);
SET @cmd = 'SQLCMD -S ' + @@ServerName + ' -d master -i $(BACKUP_FOLDER)\$(BackupCompress.sql) -v Param1 = ' + @Param1;
-- Overwrite with a trivial command first, to verify output capture works:
SET @cmd = 'DIR \';
INSERT INTO @Results (ResultsText)
EXEC xp_cmdshell @cmd;
SELECT *
FROM @Results;
You need to ensure xp_cmdshell is enabled for the instance, by executing:
EXEC sp_configure 'xp_cmdshell',1;
I have 10 transact SQL scripts that each create a table and fill it with data.
I am attempting to create 1 master sql script that will run each of the 10 other scripts.
Is there a way with TSQL / TRANSACTSQL for Microsoft SQL Server 2008 to execute another tsql script from within the current tsql script?
This is intended to be run through the SQL Server Management Studio (SSMS).
Thanks!
Try this if you are trying to execute a .sql file in SSMS:
:r C:\Scripts\Script1.sql
:r C:\Scripts\Script2.sql
:r C:\Scripts\Script3.sql
...
Note: for this to run, turn on SQLCMD mode (Query > SQLCMD Mode).
If these are scripts you run fairly often you might consider dropping them in a stored proc and running them that way...
You can also do it through sqlcmd (which I believe is more common):
sqlcmd -S serverName\instanceName -i C:\Scripts\Script1.sql
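From PowerShell you could loop over the ten files in order with sqlcmd; a minimal sketch (paths and server name are illustrative):
# Run each script in order; -b makes sqlcmd return a non-zero exit code on error
$scripts = 1..10 | ForEach-Object { "C:\Scripts\Script$_.sql" }
foreach ($s in $scripts) {
    sqlcmd -S serverName\instanceName -i $s -b
    if ($LASTEXITCODE -ne 0) { throw "Script $s failed" }
}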
Or just use OPENROWSET to read your script into a variable and execute it:
DECLARE @SQL varchar(MAX)
SELECT @SQL = BulkColumn
FROM OPENROWSET
( BULK 'MeinPfad\MeinSkript.sql'
, SINGLE_BLOB ) AS MYTABLE
--PRINT @sql
EXEC (@sql)
I find it useful to define a variable with the path, if I want to execute a set of scripts, say to run a test, something like:
:setvar path "C:\code\branch-qa"
:r $(path)\tables\client.sql
:r $(path)\tables\item.sql
:r $(path)\proc\clientreport.sql
exec clientreport
You can use osql or, better yet, the newer sqlcmd almost interchangeably. I am using osql in this example only because I happened to have a code sample sitting around, but in production I am using sqlcmd. Here is a snippet of code out of a larger procedure I use to run update scripts against databases. They are ordered by major, minor, release, build, as I name my scripts using that convention to track releases. You are obviously missing all of my error handling, the parts where I pull available scripts from the database, setup of variables, etc., but you may still find this snippet useful.
The main part I like about using osql or sqlcmd is that you can run this code in ssms, or in a stored procedure (called on a scheduled basis maybe) or from a batch file. Very flexible.
--Use cursor to run upgrade scripts
DECLARE OSQL_cursor CURSOR
READ_ONLY
FOR SELECT FileName
FROM #Scripts
ORDER BY Major, Minor, Release, Build

OPEN OSQL_cursor
FETCH NEXT FROM OSQL_cursor INTO @name
WHILE (@@fetch_status <> -1)
BEGIN
    IF ((@@fetch_status <> -2) AND (@result = 0))
    BEGIN
        SET @CommandString = 'osql -S ' + @@ServerName + ' -E -n -b -d ' + @DbName + ' -i "' + @Dir + @name + '"'
        EXEC @result = master.dbo.xp_cmdshell @CommandString, NO_OUTPUT
        IF (@result = 0)
        BEGIN
            SET @Seconds = DATEDIFF(s, @LastTime, GETDATE())
            SET @Minutes = @Seconds / 60
            SET @Seconds = @Seconds - (@Minutes * 60)
            PRINT 'Successfully applied ' + @name + ' in ' + cast(@Minutes as varchar)
                + ' minutes ' + cast(@Seconds as varchar) + ' seconds.'
            SET @LastTime = GETDATE()
        END
        ELSE
        BEGIN
            SET @errMessage = 'Error applying ' + @name + '! The database is in an unknown state and the schema may not match the version.'
            SET @errMessage = @errMessage + char(13) + 'To find the error restore the database to version ' + @StartingVersion
            SET @errMessage = @errMessage + ', set @UpToVersion = the last version successfully applied, then run ' + @name
            SET @errMessage = @errMessage + ' manually in Query Analyzer.'
        END
        IF @name = (@UpToVersion + '.sql')
            GOTO CleanUpCursor --Quit if the final script specified has been run.
    END
    FETCH NEXT FROM OSQL_cursor INTO @name
END
The simplest way would be to make your scripts stored procedures, and to call (via the EXECUTE command) each procedure in turn from a central procedure. This is ideal if you're going to run the exact same script(s) over and over again (or the same script with different parameters passed in).
If your scripts are .sql (or any kind of text) files, as @Abe Miesller says (upvoted) you can run them from within SSMS via the :r command, when SQLCMD mode is enabled. You would have to know and script the exact file path and name. This cannot be done from within a stored procedure.
A last alternative, usable with "known" file names and necessary for arbitrary file names (say, all files currently loaded in a subfolder), is to leverage the power of the extended procedure XP_CMDSHELL. Such solutions can get complex pretty fast (use it to retrieve the list of files, build and execute via xp_cmdshell a string calling SQLCMD for each file in turn, manage results and errors via output files, it goes on and on), so I'd only do this as a last resort.
Assuming you want to keep the 10 scripts in their own individual files, I would say the easiest way to do what you want would be to create a batch file that executes osql.exe to execute the 10 scripts in the order you want.