How to supply SQL-CMD variables to a SQL Agent Job?

I have created a Job in my SQL instance.
Consider the following as the SQL-CMD script set to run while executing the step:
:setvar DatabaseName "BillingDatabase"
:setvar ReportingDatabaseName "BillingDatabase_Reporting"
BEGIN TRANSACTION
SET NOCOUNT ON
USE [$(ReportingDatabaseName)]...
COMMIT TRANSACTION
This job is set to run on various environments where the database names are expected to be different.
For example, production environment may have something like -
"[CompanyName].Billing.Database", and "[CompanyName].Billing.ReportingDatabase"
How can I configure this SQL job to supply these CMD variables depending upon the environment where the job is created?
This is because our deployment process is fairly automated, and we don't want to edit the variables manually in the SQL job steps once the job is created.
Any idea of how to achieve this?

I couldn't find any direct way of fixing this,
but I wrote a PowerShell function to replace SQL-CMD variables in the script file:
function replaceCmdletParameterValueInFile( $file, $key, $value ) {
    $content = Get-Content $file
    # If a :setvar line for this key already exists, rewrite it with the new value
    if ( $content -match ":setvar\s*$key\s*[\',\""][\w\d\.\:\\\-]*[\'\""_]" ) {
        $content -replace ":setvar\s*$key\s*[\',\""][\w\d\.\:\\\-]*[\'\""_]", ":setvar $key $value" |
            Set-Content $file
    } else {
        # Otherwise append a new :setvar line to the end of the file
        Add-Content $file ":setvar $key $value"
    }
}
During deployment, I call this function to replace the database names before running Invoke-Sqlcmd.
replaceCmdletParameterValueInFile $scriptfile "DatabaseName" "`"$MyDatabaseName`""
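For example, a deployment step might look roughly like this. This is only a sketch; the script path, server name, and database names below are placeholders for whatever your deployment pipeline supplies:
# Hypothetical deployment values - substitute whatever your pipeline provides
$scriptfile = 'C:\Deploy\BillingJobStep.sql'
$MyDatabaseName = '[CompanyName].Billing.Database'
$MyReportingDatabaseName = '[CompanyName].Billing.ReportingDatabase'
# Rewrite both :setvar lines for the target environment
replaceCmdletParameterValueInFile $scriptfile "DatabaseName" "`"$MyDatabaseName`""
replaceCmdletParameterValueInFile $scriptfile "ReportingDatabaseName" "`"$MyReportingDatabaseName`""
# Then run the rewritten script against the target instance
Invoke-Sqlcmd -ServerInstance 'MyServer' -InputFile $scriptfile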

Related

How to return multiple recordsets from stored procedure using PowerShell

I need to run a stored procedure that returns 2 result sets with PowerShell. I use dbatools to do so, but I could use .NET to get there; I just don't know how.
For this example, I use exec sp_spaceused, which returns the space used in the current database. Here's the result in SSMS:
As you can see here, there are 2 result sets. Now when I run the same command in PowerShell, I can't figure out how to get the next result set.
Here is the code I've come up with:
$conn = Connect-DbaInstance -SqlInstance . -MultipleActiveResultSets
$query = 'exec sp_spaceused'
Invoke-DbaQuery -SqlInstance $conn -Query $query
I'm not even sure if I used MultipleActiveResultSets in the right way. I can't find any good example anywhere.
Wow, I just found the answer by testing all the different -As options. Here's the code:
$conn = Connect-DbaInstance -SqlInstance . -Database 'StackOverFlow'
$query = 'exec sp_spaceused'
$ds = Invoke-DbaQuery -SqlInstance $conn -Query $query -As DataSet
foreach ($table in $ds.Tables) {
    $table | Out-String
}
I use Out-String to avoid the objects being joined, but you could use Out-GridView. I also realized that I don't need to use -MultipleActiveResultSets.
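If you would rather stay with plain .NET instead of dbatools, a minimal sketch using System.Data.SqlClient gives the same DataSet shape; the connection string here is an assumption, so adjust server, database and authentication to your environment:
# Hypothetical connection string - adjust to your environment
$connectionString = 'Server=.;Database=StackOverFlow;Integrated Security=True'
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter('exec sp_spaceused', $connection)
# Fill() loads each result set the batch produces into its own DataTable
$ds = New-Object System.Data.DataSet
[void]$adapter.Fill($ds)
foreach ($table in $ds.Tables) {
    $table | Out-String
}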

Run .sql file containing PL/SQL in PowerShell

I would like to find a way to run a .sql file containing PL/SQL in PowerShell using the .NET Data Provider for Oracle (System.Data.OracleClient).
I would definitely like to avoid using sqlplus for this task.
This is where I am now:
add-type -AssemblyName System.Data.OracleClient

function Execute-OracleSQL
{
    Param
    (
        # UserName required to login
        [string]
        $UserName,
        # Password required to login
        [string]
        $Password,
        # DataSource (This is the TNSNAME of the Oracle connection)
        [string]
        $DataSource,
        # SQL File to execute.
        [string]
        $File
    )
    Begin
    {
    }
    Process
    {
        $FileLines = Get-Content $File
        $crlf = [System.Environment]::NewLine
        $Statement = [string]::Join($crlf, $FileLines)
        $connection_string = "User Id=$UserName;Password=$Password;Data Source=$DataSource"
        try {
            $con = New-Object System.Data.OracleClient.OracleConnection($connection_string)
            $con.Open()
            $cmd = $con.CreateCommand()
            $cmd.CommandText = $Statement
            $cmd.ExecuteNonQuery();
        } catch {
            Write-Error ("Database Exception: {0}`n{1}" -f $con.ConnectionString, $_.Exception.ToString())
            Stop-Transcript
            exit 1
        } finally {
            if ($con.State -eq 'Open') { $con.close() }
        }
    }
    End
    {
    }
}
but I keep getting the following error message:
ORA-00933: SQL command not properly ended
The content of the file is pretty basic:
DROP TABLE <schema name>.<table name>;
create table <schema name>.<table name>
(
    seqtuninglog NUMBER,
    sta number,
    msg varchar2(1000),
    usrupd varchar2(20),
    datupd date
);
The file does not contain PL/SQL. It contains two SQL statements, with a semicolon statement separator between (and another one at the end, which you've said you've removed).
You call ExecuteNonQuery with the contents of that file, but that can only execute a single statement, not two at once.
You have a few options. Off the top of my head and in no particular order:
a) split the statements into separate files, and have your script read and process them in the right order;
b) keep them in one file and have your script split that into multiple statements, based on the separating semicolon (see the sketch after this list) - which is a bit messy and gets nasty if you will actually have PL/SQL at some point, since that has semicolons within one 'statement' block, unless you change everything to use /;
c) wrap the statements in an anonymous PL/SQL block in the file, but as you're using DDL (drop/create) those would also then have to change to dynamic SQL;
d) have your script wrap the file contents in an anonymous PL/SQL block, but then that would have to work out if there is DDL and make that dynamic on the fly;
e) find a library to deal with the statement manipulation so you don't have to work out all the edge cases and combinations (no idea if such a thing exists);
f) use SQL*Plus or SQLcl, which you said you want to avoid.
There may be other options but they all have pros and cons.
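As a rough illustration of option b), a sketch like the one below would work for a file of plain SQL statements like yours. It reuses the $con connection and $File parameter from the function above, and it deliberately ignores PL/SQL blocks, semicolons inside string literals, and other edge cases:
# Naive split on semicolons - only safe for plain SQL statements
$raw = Get-Content $File -Raw
$statements = $raw -split ';' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
foreach ($statement in $statements) {
    $cmd = $con.CreateCommand()
    $cmd.CommandText = $statement    # one statement per ExecuteNonQuery call
    [void]$cmd.ExecuteNonQuery()
}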

Using PS to query SQL for list of users, then disable Active Directory accounts?

I'm trying to use PowerShell to query a SQL database for a list of suspended users, capture the result in a variable, then use that to loop through and disable those AD accounts. Here's the code I'm using; note I'm just trying to write the output for now instead of making a change, so I don't do anything I regret.
Import-Module ActiveDirectory
$Users = Invoke-Sqlcmd -ServerInstance 'SERVER' -Database 'NAME' -Query "SELECT EmployeeID,
EmployeeStatus FROM [NAME].[dbo].[employee] WHERE EmployeeStatus = 'S'"
foreach ($user in $users)
{
Get-ADUser -Filter "EmployeeID -eq '$($user.EmployeeID)'" `
-SearchBase "OU=Logins,DC=domain,DC=com" |
#Set-ADUser -Identity $Name -Enabled $False
Write-Verbose $User
}
The SQL query is working fine, but when I run the loop it's giving this error:
Write-Verbose : The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.
Am I just formatting this incorrectly? Or is there another way I should be thinking of this?
Thanks in advance!
If you would like to find inactive user accounts in Active Directory, you can use the Search-ADAccount cmdlet. To do this, you use the -AccountInactive parameter with Search-ADAccount.
PowerShell command below:
Search-ADAccount -AccountInactive -TimeSpan 120:00:00:00 -ResultPageSize 2000 -ResultSetSize $null | ?{$_.Enabled -eq $True} | Select-Object Name, SamAccountName, DistinguishedName | Export-CSV "C:\Temp\InActiveADUsers.CSV" -NoTypeInformation
I have given a timespan of 120 days and exported the list to a CSV file.
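Coming back to the original error: Write-Verbose does not accept pipeline input, which is why the loop breaks once the Set-ADUser line is commented out of the pipeline. Below is a minimal dry-run sketch of the original loop; the OU, filter and property names are taken from the question, and the disable call stays commented out, as in the question:
foreach ($user in $Users)
{
    $adUser = Get-ADUser -Filter "EmployeeID -eq '$($user.EmployeeID)'" `
                         -SearchBase "OU=Logins,DC=domain,DC=com"
    if ($adUser) {
        Write-Output "Would disable: $($adUser.SamAccountName)"
        # $adUser | Set-ADUser -Enabled $False   # uncomment when ready to actually disable
    }
}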

Generate data seeding script using PowerShell and SSMS

Here I found a solution for manually creating the data seeding script. The manual solution allows me to select the tables for which I want to generate the INSERT statements.
I would like to know if there is an option to run the same process via PowerShell?
So far I have managed to create a SQL script which generates the database schema:
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') "(localdb)\mssqlLocalDb"
$dbs=$s.Databases
#$dbs["HdaRoot"].Script()
$dbs["HdaRoot"].Script() | Out-File C:\sql-seeding\HdaRoot.sql
#Generate script for all tables
foreach ($tables in $dbs["HdaRoot"].Tables)
{
    $tables.Script() + "`r GO `r " | out-File C:\sql-seeding\HdaRoot.sql -Append
}
However, is there any similar way to generate the data seeding script?
Any ideas? Cheers
You can use the SMO scripter class. This will allow you to script the table creates as well as INSERT statements for the data within the tables.
In my example I'm directly targeting TempDB and defining an array of table names I want to script out rather than scripting out every table.
Scripter has a lot of options available, so I've only done a handful in this example - the important one for this task is Options.ScriptData. Without it you'll just get the schema scripts that you're already getting.
The EnumScript method at the end does the actual work of generating the scripts, outputting, and appending the script to the file designated in the options.
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
## target file
$outfile = 'f:\scriptOutput.sql'
## target server
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') "localhost"
## target database
$db = $s.databases['tempdb']
## array of tables that we want to check
$tables = @('Client','mytable','tablesHolding')
## new Scripter object
$tableScripter = new-object ('Microsoft.SqlServer.Management.Smo.Scripter')($s)
##define options for the scripter
$tableScripter.Options.AppendToFile = $True
$tableScripter.Options.AllowSystemObjects = $False
$tableScripter.Options.ClusteredIndexes = $True
$tableScripter.Options.Indexes = $True
$tableScripter.Options.ScriptData = $True
$tableScripter.Options.ToFileOnly = $True
$tableScripter.Options.filename = $outfile
## build out the script for each table we defined earlier
foreach ($table in $tables)
{
    $tableScripter.EnumScript(@($db.Tables[$table])) # EnumScript expects an array; this is ugly, but it gives it what it wants.
}

Cronjob does not execute command line in perl script

I am unfamiliar with the Linux environment, so do pardon me if I make any mistakes; do comment to clarify.
I have created a simple Perl script. This script creates a .sql file and, as shown, executes the lines in the file to insert the data into the database.
#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'strftime';

my $SQL_COMMAND;
my $HOST     = "i";
my $USERNAME = "need";
my $PASSWORD = "help";
my $NOW_TIMESTAMP = strftime '%Y-%m-%d_%H-%M-%S', localtime;

open my $out_fh, '>>', "$NOW_TIMESTAMP.sql" or die 'Unable to create sql file';
printf {$out_fh} "INSERT INTO BOL_LOCK.test(name) VALUES ('wow');";

sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    while ( my $sql_file = glob '*.sql' )
    {
        my $status = system( "$SQL_COMMAND < $sql_file" );
        if ( $status == 0 )
        {
            print "pass";
        }
        else
        {
            print "fail";
        }
    }
}
insert();
This works if I execute it while I am logged in as a user (I do not have access to admin). However, when I set a cronjob to run this file, let's say at 10.08am, by using the line (in crontab -e):
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl > /dev/null 2>&1
I know the script is being executed as the sql file is created. However no new rows are inserted into the database after 10.08am. I've searched for solutions and some have suggested using the DBI module but it's not available on the server.
EDIT: Didn't manage to solve it in the end. A root/admin account was used to execute the script, so that "solved" the problem.
First things first, get rid of the > /dev/null 2>&1 at the end of your crontab entry (at least temporarily) so you can actually see any errors that may be occurring.
In other words, change it temporarily to something like:
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1
Then you can examine the /tmp/myfile file to see what's being output.
The most likely case is that mysql is not actually on the path in your cron job, because cron itself gives a rather minimal environment.
To fix that problem (assuming that's what it is), see this answer, which gives some guidelines on how best to expand the cron environment to give you what you need. That will probably just involve adding the MySQL executable directory to your PATH variable.
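For instance, you could set PATH at the top of the crontab. This is only a sketch and the directories are assumptions, so check where mysql actually lives with which mysql first:
PATH=/usr/local/bin:/usr/bin:/bin
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1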
The other thing you may want to consider is closing the out_fh file before trying to pass it to mysql - if the buffers haven't been flushed, it may still be an empty file as far as other processes are concerned.
The expression glob(".* *") matches all files in the current working directory.
- http://perldoc.perl.org/functions/glob.html
you should not rely on the wd in a cron job. If you want to use a glob (or any file operation) with a relative path, set the wd with chdir first.
source: http://www.perlmonks.org/bare/?node_id=395387
So if your working directory is, for example, /home/user, you should insert
chdir('/home/user/');
before the while loop, i.e.:
sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    chdir('/home/user/');
    while( my $sql_file = glob '*.sql' )
    {
        ...
Replace /home/user with wherever your .sql files are being created.
It's better to do as much processing within Perl as possible. It avoids the overhead of generating a separate shell process and leaves everything under the control of the program, so that you can handle any errors much more simply.
Database access from Perl is done using the DBI module. This program demonstrates how to achieve what you have written, but without shelling out to the mysql utility. As you can see, it's also much more concise:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
my $host = "i";
my $username = "need";
my $password = "help";
my $dbh = DBI->connect("DBI:mysql:database=test;host=$host", $username, $password);
my $insert = $dbh->prepare('INSERT INTO BOL_LOCK.test(name) VALUES (?)');
my $rv = $insert->execute('wow');
print $rv ? "pass\n" : "fail\n";