I am facing a big issue, please help me. I have a file, "FileName.sql", and when I try to deploy it to the server using a PowerShell script (our company deployment tool), it always complains that there is a special character inside the file, so the deployment fails.
But it never says which line the special character is on. Could you please help me identify the special character in the SQL file?
When I execute FileName.sql manually in SQL Server Management Studio, it runs without error, but through the PowerShell script I am not able to deploy it.
How can I identify the special character inside my SQL file?
select * from TableName
select REPLACE(REPLACE(LTRIM(RTRIM([ColumnName])), CHAR(1), ''), CHAR(9), '') [NewColumnName] from TableName
I have used the REPLACE() function to remove the special characters:
LTRIM() & RTRIM() - remove leading and trailing spaces
CHAR(1) & CHAR(9) - ASCII control characters (SOH and tab)
If you are going to use this on multiple columns, you can create a function (see the sketch below). Let me know if you need help with creating a function. Thank you.
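Since the rest of this thread lives in PowerShell, here is a hedged sketch that generates that same cleanup expression for several columns; the column names are placeholders, not from the original:

# Build the REPLACE/LTRIM/RTRIM cleanup for a list of columns (names assumed)
$columns = 'ColumnName1', 'ColumnName2'
$expressions = foreach ($col in $columns) {
    "REPLACE(REPLACE(LTRIM(RTRIM([$col])), CHAR(1), ''), CHAR(9), '') [$col]"
}
"select $($expressions -join ', ') from TableName"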
Open the file with Notepad++, find the character, and save/re-encode it as ASCII/UTF-8. Possibly someone edited the file in Word and you have special/fancy quotes or hyphens.
No clue if this helps, but it may...
Find-FancyCharacters.ps1
[CmdletBinding()]
param (
    [string[]]
    $Files
)

# Characters that typically sneak in from Word: smart quotes and an en dash
$badCharacters = @"
‘
’
–
"@

$badCharactersList = $badCharacters -split "`r`n"

if (-not $Files)
{
    $Files = @('D:\test\badchars.txt')
}

foreach ($file in $Files)
{
    Write-Progress -Activity "Checking file: $file..."
    $contents = Get-Content $file -Encoding UTF8
    $lineNumber = 1
    foreach ($line in $contents)
    {
        foreach ($invalidChar in $badCharactersList)
        {
            if ($line -match $invalidChar)
            {
                Write-Output "Invalid character found: f=$file, l=$lineNumber, c=$invalidChar"
                break
            }
        }
        $lineNumber++
    }
}
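Usage note: assuming the script is saved as Find-FancyCharacters.ps1, point it at the suspect file (the path here is just an example):

.\Find-FancyCharacters.ps1 -Files 'D:\deploy\FileName.sql'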
Hi Glorious People of the Interwebz!
I come to you with a humble question (please go easy on me, I am fairly OK in PowerShell, but my SQL skills are minimal... :( )
So I have been tasked with writing a PowerShell script to import data (from a number of CSV files to a database), and I have made good progress, based on this (I heavily modified my version). All works dashingly, except one part: when I try to insert the values (I created a sort of "mapping file" to map the CSV headers to the data), I can't seem to use the created string in the VALUES part. So here is what I have:
This is my current code for powershell (ignore the comments)
This is a sample data csv
This is my mapping file
What I would want is to replace the
VALUES(
'$($CSVLine.Invoice_Status_Text)',
'$($CSVLine.Invoice_Status)',
'$($CSVLine.Dispute_Required_Text)',
'$($CSVLine.Dispute_Required)',
'$($CSVLine.Dispute_Resolved_Text)',
'$($CSVLine.Dispute_Resolved)',
'$($CSVLine.Sub_Account_Number)',
'$($CSVLine.QTY)',
'$($CSVLine.Date_of_Service)',
'$($CSVLine.Service)',
'$($CSVLine.Amount_of_Service)',
'$($CSVLine.Total)',
'$($CSVLine.Location)',
'$($CSVLine.Dispute_Reason_Text)',
'$($CSVLine.Dispute_Reason)',
'$($CSVLine.Numeric_counter)'
);"
part, for example, with a string generated this way:
But when I replace the long - and honestly, boring to type - values with the $valueString, I get this type of error:
Incorrect syntax was encountered while parsing '$($'.
Not sure if it matters, but my PS version is 7.1.
Are there any good people who can give a good suggestion on how to build the values from my text file...?
Ta,
F.
As commented, wrapping a variable inside single quotes takes the variable as written, literally, so you do not get the value it contains (7957), but the literal string $($CSVLine.Numeric_counter) instead.
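A quick sketch of that quoting difference (the property name comes from your question; the object here is just a stand-in):

$CSVLine = [pscustomobject]@{ Numeric_counter = 7957 }
'VALUES ($($CSVLine.Numeric_counter))'   # single quotes: stays literal
"VALUES ($($CSVLine.Numeric_counter))"   # double quotes: expands to VALUES (7957)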
I don't do SQL a lot, but I think I would change the part where you construct the values to insert like this:
# demo, read the csv file in your example
$csv = Import-Csv D:\Test\test.csv -Delimiter ';'

# demo, these are the headers (or better yet, the property names to use from the objects in the CSV) as an ARRAY
# (you use `$headers = Get-Content -Path 'C:\Temp\SQL\ImportingCSVsIntoSQLv1\config\headers.txt'`)
$headers = 'Invoice_Status_Text','Invoice_Status','Dispute_Required_Text','Dispute_Required',
           'Dispute_Resolved_Text','Dispute_Resolved','Sub_Account_Number','QTY','Date_of_Service',
           'Service','Amount_of_Service','Total','Location','Dispute_Reason_Text','Dispute_Reason','Numeric_counter'

# capture formatted blocks of values for each row in the CSV
$AllValueStrings = $csv | ForEach-Object {
    # get a list of values using the property names in $headers
    $values = foreach ($propertyName in $headers) {
        $value = $_.$propertyName
        # output the value to be captured in $values
        # for SQL, single-quote string-type values; numeric values go without quotes
        if ($value -match '^[\d\.]+$') { $value }
        else { "'{0}'" -f $value }
    }
    # output the joined values for this row in the CSV
    $values -join ",`r`n"
}

# $AllValueStrings now holds as many formatted value blocks to use
# in the SQL as there are records (rows) in the csv
$AllValueStrings
Using your examples, $AllValueStrings would yield
'Ready To Pay',
1,
'No',
2,
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
7957
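From there, a hedged sketch of how each block could be spliced into the INSERT; the table name dbo.Invoices and the Invoke-Sqlcmd parameters are assumptions, not from your setup:

foreach ($valueString in $AllValueStrings) {
    # Build one INSERT per CSV row; the column list reuses $headers from above
    $query = "INSERT INTO dbo.Invoices ($($headers -join ', ')) VALUES ($valueString);"
    # Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query $query
}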
I would like to find a way to run a .sql file containing PL/SQL in PowerShell, using the .NET Data Provider for Oracle (System.Data.OracleClient).
I would definitely like to avoid using sqlplus for this task.
This is where I am now:
add-type -AssemblyName System.Data.OracleClient

function Execute-OracleSQL
{
    Param
    (
        # UserName required to login
        [string] $UserName,
        # Password required to login
        [string] $Password,
        # DataSource (this is the TNS name of the Oracle connection)
        [string] $DataSource,
        # SQL file to execute
        [string] $File
    )
    Begin { }
    Process
    {
        $FileLines = Get-Content $File
        $crlf = [System.Environment]::NewLine
        $Statement = [string]::Join($crlf, $FileLines)
        $connection_string = "User Id=$UserName;Password=$Password;Data Source=$DataSource"
        try {
            $con = New-Object System.Data.OracleClient.OracleConnection($connection_string)
            $con.Open()
            $cmd = $con.CreateCommand()
            $cmd.CommandText = $Statement
            $cmd.ExecuteNonQuery()
        } catch {
            Write-Error ("Database Exception: {0}`n{1}" -f $con.ConnectionString, $_.Exception.ToString())
            Stop-Transcript
            exit 1
        } finally {
            if ($con.State -eq 'Open') { $con.Close() }
        }
    }
    End { }
}
but I keep getting the following error message:
ORA-00933: SQL command not properly ended
The content of the file is pretty basic:
DROP TABLE <schema name>.<table name>;
create table <schema name>.<table name>
(
seqtuninglog NUMBER,
sta number,
msg varchar2(1000),
usrupd varchar2(20),
datupd date
);
The file does not contain PL/SQL. It contains two SQL statements, with a semicolon statement separator between them (and another at the end, which you've said you've removed).
You call ExecuteNonQuery with the contents of that file, but that can only execute a single statement, not two at once.
You have a few options. Off the top of my head and in no particular order:
a) split the statements into separate files, and have your script read and process them in the right order;
b) keep them in one file and have your script split it into multiple statements on the separating semicolon (see the sketch after this list) - which is a bit messy, and gets nasty if you actually have PL/SQL at some point, since that has semicolons within one 'statement' block, unless you change everything to use /;
c) wrap the statements in an anonymous PL/SQL block in the file, but as you're using DDL (drop/create) those statements would also then have to change to dynamic SQL;
d) have your script wrap the file contents in an anonymous PL/SQL block, but then that would have to work out if there is DDL and make that dynamic on the fly;
e) find a library to deal with the statement manipulation so you don't have to work out all the edge cases and combinations (no idea if such a thing exists);
f) use SQL*Plus or SQLcl, which you said you want to avoid.
There may be other options but they all have pros and cons.
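To make option (b) concrete, here is a minimal, deliberately naive sketch; it reuses $con and $File from the function above and assumes plain SQL only (no PL/SQL blocks, no semicolons inside string literals):

# Naive split on semicolons; each trimmed, non-empty chunk runs as its own statement
$sql = Get-Content $File -Raw
$statements = $sql -split ';' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
foreach ($statement in $statements) {
    $cmd = $con.CreateCommand()
    $cmd.CommandText = $statement
    [void]$cmd.ExecuteNonQuery()
}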
I have 4 files with the same CSV header, as follows:
Column1,Column2,Column3,Column4
But I only need the data from Column2, Column3 and Column4 to import into the SQL database using BCP. I am using PowerShell to select the columns I want and to import the required data with BCP, but my PowerShell runs with no error and no data is updated in my database table. May I know how to set up BCP to import the output from PowerShell into the database table? Here is my PowerShell script:
$filePath = Get-ChildItem -Path 'D:\test\*' -Include $filename
$desiredColumn = 'Column2','Column3','Column4'
foreach ($file in $filePath)
{
    Write-Host $file
    $test = Import-Csv $file | Select-Object $desiredColumn
    Write-Host $test
    $action = bcp <myDatabaseTableName> in $test -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
This is the output from the PowerShell script:
D:\test\sample1.csv
@{column2=111;column3=222;column4=333} @{column2=444;column3=555;column4=666}
D:\test\sample2.csv
@{column2=777;column3=888;column4=999} @{column2=aaa;column3=bbb;column4=ccc}
First off, you can't update a table with bcp. It is used to bulk load data; that is, it will either insert new rows or export existing data into a flat file. Changing existing rows, usually called updating, is out of scope for bcp. If that's what you need, you need another tool. Sqlcmd works fine, and PowerShell's got Invoke-Sqlcmd for running arbitrary T-SQL statements.
Anyway, the bcp utility has notoriously tricky syntax. As far as I know, one cannot bulk load data by passing it as a parameter to bcp; a source file must be used. Thus you need to save the filtered data to a file and pass its name to bcp.
Exporting a filtered CSV is easy enough, just remember to use the -NoTypeInformation switch, lest you get #TYPE Selected.System.Management.Automation.PSCustomObject as your first row of data. I'm assuming the bcp arguments are well and good (why -F2 though? And Unix newlines?).
Stripping the double quotes requires another edit to the file. The Scripting Guy has a solution.
foreach ($file in $filePath) {
    Write-Host $file
    $test = Import-Csv $file | Select-Object $desiredColumn
    # Overwrite filtereddata.csv, should one exist, with the filtered data
    $test | Export-Csv -Path .\filtereddata.csv -NoTypeInformation
    # Remove double quotes
    (Get-Content filtereddata.csv) | ForEach-Object { $_ -replace '"', '' } | Out-File filtereddata.csv -Force -Encoding ascii
    $action = bcp <myDatabaseTableName> in filtereddata.csv -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
Depending on your locale, the column separator might be a semicolon, a colon or something else. Use the -Delimiter '<character>' switch to pass whatever you need, or change bcp's argument.
Erland's got a helpful page about bulk operations. Also, see Redgate's advice.
Without needing to modify the file first, there is an answer here about how bcp can handle quoted data:
BCP in with quoted fields in source file
Essentially, you need to use the -f option and create/use a format file to tell SQL Server your custom field delimiter: in short, it is no longer a lone comma (,) but now "," - a comma with two double quotes. You need to escape the double quotes, plus a small trick to handle the first double quote on a line. But it works like a charm.
Also, you need the format file to ignore column(s): just set the destination column number to zero. All with no need to modify the file before the load. Good luck!
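For illustration only, a sketch of what such a non-XML format file might look like for this CSV, written from PowerShell; the 14.0 version line, field widths and collation are assumptions you'd adjust to your server (the extra first field exists solely to swallow the opening quote, and destination column 0 ignores Column1):

# Hypothetical format file; match the version line to your bcp version
@'
14.0
5
1   SQLCHAR   0   0     "\""       0   FIRST_QUOTE   ""
2   SQLCHAR   0   100   "\",\""    0   Column1       SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   "\",\""    1   Column2       SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR   0   100   "\",\""    2   Column3       SQL_Latin1_General_CP1_CI_AS
5   SQLCHAR   0   100   "\"\r\n"   3   Column4       SQL_Latin1_General_CP1_CI_AS
'@ | Set-Content -Path .\quoted.fmt -Encoding ascii

# then something like: bcp <myDatabaseTableName> in sample1.csv -T -S <MyDatabase> -f .\quoted.fmt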
I have continuous automated application deployment built on Azure DevOps Server 2019 (ex TFS). Part of the deployment is checking the Oracle DB status before running scripts. The script below worked for a year, then at some point (probably after the latest Azure DevOps Server update, 2019.1) it stopped working with an error:
SQL> SP2-0734: unknown command beginning " select..." - rest of line ignored.
SQL> SP2-0734: unknown command beginning " select..." - rest of line ignored.
$chekdbsql = 'select status from v$instance;'
$i = 1
$chkdb = ""
while ($chkdb.Contains("OPEN") -ne 'True') {
    Clear-Variable -Name chkdb
    $chkdb = ($chekdbsql | cmd /c "sqlplus -s user/password@localhost/ora as sysdba")
    if ($chkdb.Contains("OPEN") -eq 'True') {
        break
    }
    echo "Trying to connect to database. Attempt $i"
    sleep 10
    $i++
}
write-host "Connected! Database's status is 'open'." -ForegroundColor green
If I execute the command locally on the machine where the application is built, it works well.
The space before select makes me think it's a character encoding issue. See e.g. this and this.
beginning " select..."
I'm not familiar enough with PowerShell to know what the problem is. I can think of a workaround, but it's a bit of a hack.
$chekdbsql = "`nselect status from v`$instance;"
This makes sure that whatever garbage characters are getting inserted at the beginning of the string will be on their own line in SQL*Plus. So if you get an SP2-0734, your select command will still run after it. Since it's now a double-quoted string, I escaped the $.
I need to change a string in multiple text files.
I have written the script below in Access VBA, but I get a 'Type mismatch' error:
Dim str As String
str = "N=maher"
Call Shell("c:\windows\system32\powershell.exe" - Command("get-content -Path e:\temptest.txt") - Replace(str, "maher", "ali"))
The syntax for calling PowerShell is way off. Suggestion: get it working from the command line yourself first, and then run from Access (an odd choice: it just makes this more complicated).
A PowerShell script to do this (.ps1 file) would need to contain something like:
Get-Content -Path "E:\temptest.txt" | ForEach-Object { $_ -Replace 'maher', 'ali' } | do-something-with-the-updated-content
You need to define:
What you are replacing (you pass N=maher in, but then hard-code two strings for Replace).
What to do with the strings after the replacement (Get-Content only reads files).
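For example, a minimal sketch of that full pipeline with the result written back to the same file (overwriting in place is just one choice for the last step):

# Read the whole file first (the parentheses force that), replace, then write back
(Get-Content -Path 'E:\temptest.txt') -replace 'maher', 'ali' |
    Set-Content -Path 'E:\temptest.txt'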