Files with .sql extension identified as binary in Mercurial [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why does Mercurial think my SQL files are binary?
I generated a complete set of scripts for the stored procedures in a database. When I created a Mercurial repository and added these files, they were all added as binary. Obviously I still get the benefits of versioning, but I lose a lot of the efficiency, diffing, etc. of text files. I verified that these files are indeed all plain text.
Why is it doing this?
What can I do to avoid it?
Is there a way to get Hg to change its mind about these files?
Here is a snippet of changeset log:
496.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFindCustomerByMatchCode.StoredProcedure.sql has changed
497.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFindUnreconcilableChecks.StoredProcedure.sql has changed
498.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixBadLabelSelected.StoredProcedure.sql has changed
499.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixCCOPL.StoredProcedure.sql has changed
500.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixCCOrderMoneyError.StoredProcedure.sql has changed
Thanks in advance for your help
Jim

In keeping with Mercurial's views on binary files, it does not actually track file types, which means there is no way for a user to mark a file as binary or not binary.
As tonfa and Rudi mentioned, Mercurial determines whether a file is binary by checking whether there is a NUL byte anywhere in the file. In the case of UTF-16 (or UTF-32) files, a NUL byte is pretty much guaranteed.
To "fix" this, you would have to ensure that the files are encoded as UTF-8 instead of UTF-16. Ideally, your database would have a setting for the Unicode encoding used when doing the export. If that's not the case, another option would be to write a pre-commit hook to do the conversion (see How to convert a file to UTF-8 in Python for a start), but you would have to be very careful about which files you convert.
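If you are curious whether a given file will trip that heuristic, here is a minimal sketch of the same NUL-byte check in Windows PowerShell (the function name is my own, not anything from Mercurial):
function Test-LooksBinaryToHg
{
    Param ([Parameter(Mandatory = $True)] [string]$Path)
    # Mercurial treats content as binary if it contains a NUL byte anywhere;
    # UTF-16/UTF-32 text virtually always does.
    [byte[]]$bytes = Get-Content -Encoding Byte -ReadCount 0 -Path $Path
    Write-Output ($bytes -contains 0)
}
A UTF-16 file exported by SSMS returns True here, because every ASCII character in it is stored with a NUL high byte.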

I know it's a bit late, but I was evaluating Kiln and came across this problem. After a discussion with the guys at Fogbugz, who couldn't give me an answer other than "File/Save As" from SSMS for every *.sql file (very tedious), I decided to have a look at writing a quick script to convert the *.sql files.
Fortunately you can use one Microsoft technology (PowerShell) to (sort of) overcome an issue with another Microsoft technology (SSMS). Using PowerShell, change to the directory that contains your *.sql files and then copy and paste the following into the PowerShell shell (or save it as a .ps1 script and run it from PowerShell; make sure to run "Set-ExecutionPolicy RemoteSigned" before trying to run a .ps1 script):
function Get-FileEncoding
{
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $True, ValueFromPipelineByPropertyName = $True)]
        [string]$Path
    )
    # Read the first four bytes and compare them against the known BOMs
    [byte[]]$byte = Get-Content -Encoding Byte -ReadCount 4 -TotalCount 4 -Path $Path
    if ($byte[0] -eq 0xef -and $byte[1] -eq 0xbb -and $byte[2] -eq 0xbf)
    { Write-Output 'UTF8' }      # EF BB BF: UTF-8
    elseif ($byte[0] -eq 0xfe -and $byte[1] -eq 0xff)
    { Write-Output 'Unicode' }   # FE FF: UTF-16 big-endian
    elseif ($byte[0] -eq 0xff -and $byte[1] -eq 0xfe)
    { Write-Output 'Unicode' }   # FF FE: UTF-16 little-endian (what SSMS writes)
    elseif ($byte[0] -eq 0 -and $byte[1] -eq 0 -and $byte[2] -eq 0xfe -and $byte[3] -eq 0xff)
    { Write-Output 'UTF32' }     # 00 00 FE FF: UTF-32 big-endian
    elseif ($byte[0] -eq 0x2b -and $byte[1] -eq 0x2f -and $byte[2] -eq 0x76)
    { Write-Output 'UTF7' }      # 2B 2F 76: UTF-7
    else
    { Write-Output 'ASCII' }     # No BOM: assume ASCII/ANSI
}
$files = Get-ChildItem "*.sql"
foreach ($file in $files)
{
    $encoding = Get-FileEncoding $file
    # Re-save UTF-16 little-endian files as UTF-8
    if ($encoding -eq 'Unicode')
    {
        (Get-Content "$file" -Encoding Unicode) | Set-Content -Encoding UTF8 "$file"
    }
}
The function Get-FileEncoding is courtesy of http://poshcode.org/3227, although I had to modify it slightly to cater for the UCS-2 little-endian files that SSMS seems to have saved these as. I would recommend backing up your files first, as the script overwrites the originals. You could, of course, modify the script so that it saves a UTF-8 version of the file instead, e.g. change the last line of code to:
(Get-Content "$file" -Encoding Unicode) | Set-Content -Encoding UTF8 "$file.new"
The script should be easy to modify to traverse subdirectories as well.
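For example (untested, but in the spirit of the script above), recursing is just a matter of swapping the Get-ChildItem line:
$files = Get-ChildItem -Path . -Filter *.sql -Recurse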
Now you just need to remember to run this if there are any new *.sql files, before you commit and push your changes. Any files already converted and subsequently opened in SSMS will stay as UTF-8 when saved.

Related

BCP update database table based on output from PowerShell

I have 4 files with the same CSV header, as follows:
Column1,Column2,Column3,Column4
But I only require the data from Column2, Column3, and Column4 to import into the SQL database using BCP. I am using PowerShell to select the columns that I want and import the required data using BCP, but my PowerShell runs with no errors and no data is updated in my database table. How can I get BCP to import the output from PowerShell into the database table? Here is my PowerShell script:
$filePath = Get-ChildItem -Path 'D:\test\*' -Include $filename
$desiredColumn = 'Column2','Column3','Column4'
foreach ($file in $filePath)
{
    write-host $file
    $test = import-csv $file | select $desiredColumn
    write-host $test
    $action = bcp <myDatabaseTableName> in $test -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
This is the output from the PowerShell script:
D:\test\sample1.csv
#{column2=111;column3=222;column4=333} #{column2=444;column3=555;column4=666}
D:\test\sample2.csv
#{column2=777;column3=888;column4=999} #{column2=aaa;column3=bbb;column4=ccc}
First off, you can't update a table with bcp. It is used to bulk load data: it will either insert new rows or export existing data into a flat file. Changing existing rows, usually called updating, is out of scope for bcp. If that's what you need, you need to use another tool. Sqlcmd works fine, and PowerShell's got Invoke-Sqlcmd for running arbitrary TSQL statements.
Anyway, the bcp utility has notoriously tricky syntax. As far as I know, one cannot bulk load data by passing the data as a parameter to bcp; a source file must be used. Thus you need to save the filtered file and pass its name to bcp.
Exporting a filtered CSV is easy enough; just remember to use the -NoTypeInformation switch, lest you get #TYPE Selected.System.Management.Automation.PSCustomObject as your first row of data. This assumes the bcp arguments are well and good (why -F2, though? And Unix newlines?).
Stripping the double quotes requires another edit to the file. The Scripting Guy has a solution.
foreach ($file in $filePath) {
    write-host $file
    $test = import-csv $file | select $desiredColumn
    # Overwrite filtereddata.csv, should one exist, with filtered data
    $test | export-csv -path .\filtereddata.csv -NoTypeInformation
    # Remove double quotes
    (gc filtereddata.csv) | % {$_ -replace '"', ''} | out-file filtereddata.csv -Fo -En ascii
    $action = bcp <myDatabaseTableName> in filtereddata.csv -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
Depending on your locale, the column separator might be a semicolon, a comma, or something else. Use the -Delimiter '<character>' switch to pass whatever you need, or change bcp's argument.
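For example, to keep the semicolon separator that the bcp call above expects:
$test | export-csv -path .\filtereddata.csv -NoTypeInformation -Delimiter ';'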
Erland's got a helpful page about bulk operations. Also, see Redgate's advice.
There is an answer here about how bcp can handle quoted data without needing to modify the file first:
BCP in with quoted fields in source file
Essentially, you need to use the -f option and create/use a format file to tell SQL Server your custom field delimiter: in short, it is no longer a lone comma (,) but is now ","... a comma between two double quotes. You need to escape the double quotes, plus a small trick to handle the first double quote on a line. But it works like a charm.
The format file can also ignore column(s): just set the destination column number to zero. All with no need to modify the file before the load. Good luck!
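As a sketch only (the column widths and file names below are made up for illustration, the target table is hypothetical, and the version number on the first line must match your bcp version), a non-XML format file for a fully quoted four-column CSV that skips Column1 might look like:
13.0
5
1   SQLCHAR   0   0     "\""       0   FIRST_QUOTE   ""
2   SQLCHAR   0   100   "\",\""    0   Column1       ""
3   SQLCHAR   0   100   "\",\""    1   Column2       ""
4   SQLCHAR   0   100   "\",\""    2   Column3       ""
5   SQLCHAR   0   100   "\"\r\n"   3   Column4       ""
The zero-width first field swallows the leading double quote on each line, the "," terminators strip the quotes between fields, and a destination column of 0 drops Column1. You would then load with something like bcp MyDb.dbo.MyTable in filtereddata.csv -f quoted.fmt -T -S <MyDatabase>.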

Copy filename to file with same number

This is a bit vague to describe, so I'll use an example.
I have around 150 DWG files that have the same content as the SVGs (they're both vector drawing formats, converted 1 to 1). I'd like to apply the filenames from the DWGs to the SVGs that start with the same number.
So I end up with:
001_TERMINAL.dwg
001_TERMINAL.svg
002_DIFFUSER.dwg
002_DIFFUSER.svg
etcetera...
I'm using a PC with Windows 10.
How can I implement a solution to my problem?
Thanks!
Assuming it's always 3 digits in the *.svg file names:
set DIR=C:\mydir
rem Allow repeated setting of !variables! in the FOR loop below
setlocal enabledelayedexpansion
for %%I in ("%DIR%\*.dwg") do (
    rem "~n" picks out just the filename part of the %%I variable
    set BASENAME=%%~nI
    rem Substring - batch file style: the first 3 characters
    set PREFIX=!BASENAME:~0,3!
    echo !PREFIX! ... !BASENAME!
    rename "%DIR%\!PREFIX!.svg" "!BASENAME!.svg"
)
Note this will need to be in a batch file for the %%I to work.
The main complication there is using variables in a multi-line FOR loop.
For these you have to use the delayed expansion option, to enable the variable to be expanded each time round, rather than when the line is parsed. This means you have to use !variable! instead of the more normal %variable% in a batch file.
Because you are on Windows, PowerShell is a great candidate to solve this.
For the script below, the length of the numeric part in front of the underscore character doesn't matter, as long as there is an underscore in the .dwg filename, as visible in your question.
Just replace 'c:\folder' here below with the path your files are stored in.
$folderPath = "c:\folder"
$files = Get-ChildItem ([System.IO.Path]::Combine($folderPath, "?*_*.dwg"))
for ($i = 0; $i -lt $files.Count; $i++)
{
    $file = $files[$i]
    $dwgFileName = $file.BaseName
    # Everything before the first underscore is the number part
    $index = $dwgFileName.IndexOf("_")
    $numberPart = $dwgFileName.Substring(0, $index)
    $svgFilePath = [System.IO.Path]::Combine($folderPath, "$numberPart.svg")
    if ([System.IO.File]::Exists($svgFilePath))
    {
        Rename-Item -Path $svgFilePath -NewName "$dwgFileName.svg"
    }
}
Using bash:
#!/bin/bash
for f in *.dwg; do
    # Split the filename on "_" and take the leading number
    IFS='_' read -r -a arr <<< "$f"
    mv "${arr[0]}.svg" "${f%.*}.svg"
done

run powershell in VBA access

I need to change a string in multiple text files.
I have written the script below in Access VBA, but the error is "Type mismatch":
Dim str As String
str = "N=maher"
Call Shell("c:\windows\system32\powershell.exe" - Command("get-content -Path e:\temptest.txt") - Replace(str, "maher", "ali"))
The syntax for calling PowerShell is way off. Suggestion: get it working from the command line yourself first, and then run it from Access (an odd choice: it just makes this more complicated).
A PowerShell script to do this (.ps1 file) would need to contain something like:
Get-Content -Path "E:\temptest.txt" | ForEach-Object { $_ -Replace 'maher', 'ali' } | do-something-with-the-updated-content
You need to define:
What you are replacing (you pass N=maher in, but then hard-code two strings for Replace).
What to do with the strings after doing the replacement (Get-Content only reads files).
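As a sketch of both halves (assuming the same file path as the question, and filling the do-something step with Set-Content; the parentheses around Get-Content make PowerShell finish reading the file before Set-Content rewrites it):
Dim cmd As String
cmd = "powershell.exe -NoProfile -Command " & _
      """(Get-Content 'E:\temptest.txt') -replace 'maher', 'ali' | Set-Content 'E:\temptest.txt'"""
Call Shell(cmd, vbHide)
Note the doubled quotes: VBA escapes a double quote inside a string by doubling it, so getting the quoting right is most of the battle here.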

Move files with today's modified date to another folder, then delete files older than two weeks

The script needs to run every week (via Task Scheduler) and achieve the following:
When the script runs, all files in a certain folder with today's date as their modified date need to be copied to another folder; once they are copied, delete everything older than 2 weeks from the original folder.
I have something in PowerShell:
$path = "C:\FromFTP\*.*"
$Destination = "C:\Backup"
foreach ($file in (Get-ChildItem $path))
{
    if ($file.LastWriteTime -gt (Get-Date).AddDays(-1).Date)
    {
        Move-Item -Path $file.FullName -Destination $Destination
    }
}
But maybe it could also be done in VBA...
Can someone help me with that? Thank you!
Try this:
$Path = "C:\FromFTP";
$Destination = "C:\Backup";
$Today = (Get-Date).Date;
Get-ChildItem -Path $Path |
Where-Object { ($_.LastWriteTime -ge $Today) -and ($_.LastWriteTime -lt $Today.AddDays(1)) } |
Move-Item -Destination $Destination;
Get-ChildItem -Path $Path |
Where-Object { $_.CreationTime -lt $Today.AddDays(-14) } |
Remove-Item;
The first pipeline gets every file that was last written to (LastWriteTime) sometime between midnight today and before midnight tomorrow. Obviously it's difficult to write files tomorrow, but it makes it easy to run the script for a date in the past, too.
The second pipeline deletes every file that was first created (CreationTime) more than 14 days before today. The number of days might be off by one, depending on how you count.
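To run it weekly via Task Scheduler, as the question asks, one option is the ScheduledTasks module; this is a sketch where the task name, script path, and schedule are placeholders:
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\MoveAndClean.ps1'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 6am
Register-ScheduledTask -TaskName 'FtpBackup' -Action $action -Trigger $trigger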

PowerShell - listing folders in multiple places and changing files in those places

I'm trying to set up a script designed to change a bit over 100 placeholders in probably some 50 files. In general I have a list of possible placeholders and their values. I have some applications that have exe.config files as well as ini files. These applications are stored in c:\programfiles(x86)\ and in d:\. In general I managed to make it work with one path, but not with two. I could easily write the code to replace twice, but that leaves me with a lot of messy code that would be harder for others to read.
ls c:\programfiles(x86) -Recurse | where-object {$_.Extension -eq ".config" -or $_.Extension -eq ".ini"} | %{(gc $PSPath) | %{
$_ -replace "abc", "qwe" `
-replace "lkj", "hgs" `
-replace "hfd", "fgd"
} | sc $_PSPath; Write-Host "Processed: " + $_.Fullname}
I've tried to include 2 paths by putting $a = path1, $b = path2, $c = $a + $b, and that seems to work as far as getting the ls command to run in two different places. However, it does not seem to store the path the files are in, and so it will try to replace the filenames it has found in the folder you are currently running the script from. And thus, even if I might be in one of the places where the files are supposed to be, I'm not in the other...
So... any idea how I can get PowerShell to list files in 2 different places and replace the same variables in both places without having to have the code twice? I thought about putting the code I would have to use twice into a variable and calling it when I needed it instead of writing it again, but it seemed to resolve the code before using it, and that didn't exactly give me results since the data comes from the first part.
If you got a cool pipeline, then every problem looks like ... uhm ... fluids? objects? I have no clue. But anyway, just add another layer (and fix a few problems along the way):
$places = 'C:\Program Files (x86)', 'D:\some other location'
$places |
Get-ChildItem -Recurse -Include *.ini,*.config |
ForEach-Object {
(Get-Content $_) -replace 'abc', 'qwe' `
-replace 'lkj', 'hgs' `
-replace 'hfd', 'fgd' |
Set-Content $_
'Processed: {0}' -f $_.FullName
}
Notable changes:
Just iterate over the list of folders to crawl as the first step.
Doing the filtering directly in Get-ChildItem makes it faster and saves the Where-Object.
-replace can be applied directly to an array, no need for another ForEach-Object there.
If the number of replacements is large you may consider using a hashtable to store them so that you don't have twenty lines of -replace 'foo', 'bar'.
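Something like this sketch, for example, with [regex]::Escape added because -replace treats its first operand as a regular expression:
$replacements = @{
    'abc' = 'qwe'
    'lkj' = 'hgs'
    'hfd' = 'fgd'
}
$places |
    Get-ChildItem -Recurse -Include *.ini, *.config |
    ForEach-Object {
        # Read the whole file once, apply every replacement, write it back
        $text = Get-Content $_ -Raw
        foreach ($pair in $replacements.GetEnumerator()) {
            $text = $text -replace [regex]::Escape($pair.Key), $pair.Value
        }
        Set-Content -Path $_ -Value $text
        'Processed: {0}' -f $_.FullName
    }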