DOS filename escaping for use with *nix commands - scripting

I want to escape a DOS filename so I can use it with sed. I have a DOS batch file something like this:
set FILENAME=%~f1
sed 's/Some Pattern/%FILENAME%/' inputfile
(Note: %~f1 expands %1 to a fully qualified path name, e.g. C:\utils\MyFile.txt)
I found that the backslashes in %FILENAME% are just escaping the next letter.
How can I double them up so that they are escaped?
(I have cygwin installed so feel free to use any other *nix commands)
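For reference, the substitution sed actually needs to see has each backslash doubled, e.g.
s/Some Pattern/C:\\utils\\MyFile.txt/
and the question is how to produce that doubled form from %FILENAME% automatically.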
Solution
Combining Jeremy's and Alexandru Nedelcu's suggestions, and using | as the delimiter in the sed command, I have:
set FILENAME=%~f1
cygpath "s|Some Pattern|%FILENAME%|" >sedcmd.tmp
sed -f sedcmd.tmp inputfile
del /q sedcmd.tmp

This will work. It's messy because in BAT files you can't use set var=`cmd` like you can in Unix.
The fact that echo doesn't understand quotes is also messy, and could lead to trouble if Some Pattern contains shell metacharacters.
set FILENAME=%~f1
echo s/Some Pattern/%FILENAME%/ | sed -e "s/\\/\\\\/g" >sedcmd.tmp
sed -f sedcmd.tmp inputfile
del /q sedcmd.tmp
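For illustration, with %1 set to C:\utils\MyFile.txt as in the note above, the generated sedcmd.tmp should contain the substitution with every backslash doubled:
s/Some Pattern/C:\\utils\\MyFile.txt/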
[Edited]: I am surprised that it didn't work for you. I just tested it, and it worked on my machine. I am using sed from http://sourceforge.net/projects/unxutils and using cmd.exe to run those commands in a bat file.

You could try as alternative (from the command prompt) ...
> cygpath -m c:\some\path
c:/some/path
As you can guess, it converts backslashes to slashes.
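If you are happy to run the whole thing from a Cygwin bash prompt instead of a BAT file, the converted path can be fed straight to sed, which sidesteps the escaping problem entirely (a minimal sketch using the example path from the question; | is used as the delimiter so the forward slashes don't clash):
FILENAME=$(cygpath -m 'C:\utils\MyFile.txt')
sed "s|Some Pattern|$FILENAME|" inputfile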

#Alexandru & Jeremy, thanks for your help. You both get upvotes.
#Jeremy
Using your method I got the following error:
sed: -e expression #1, char 8:
unterminated `s' command
If you can edit your answer to make it work I'd accept it. (pasting my solution doesn't count)
Update: OK, I tried it with UnixUtils and it worked. (For reference, the UnixUtils I downloaded was dated March 1, 2007, and uses GNU sed version 3.02; my Cygwin install has GNU sed version 4.1.5.)

Related

How to run an .awk file without 'awk -f' command?

I am new to awk scripting. I am trying to figure out how to run an awk file without the awk -f command. I see people keep saying to add "#!bin/awk -f" as the first line of an awk file. But this didn't work for my awk script. It still gives me a "no file or directory" error.
My question is: what does "#!bin/awk -f" really mean, and what does it do?
It's #!/bin/awk -f, not #!bin/awk. That will probably work, but there's no guarantee. If someone who has awk installed in a different location runs your script, it won't work. What you want is this: #!/usr/bin/env awk -f.
#! is what tells the system what to use to interpret your script. It should go at the very top of your file. It's called a shebang. Right after that, you put the path to the interpreter.
/usr/bin/env finds where awk is located and uses that as the interpreter. So if someone has awk installed somewhere else, like /usr/local/bin, it will still be found. This probably won't matter for you, but it's a good habit to get into. It's more portable and easier to share.
The -f tells awk to read its program from a file. You could run awk -f yourfilename.awk in bash, but in the shebang, -f means the rest of the script is the file awk reads its program from.
I hope this helped. Feel free to ask me any questions if it doesn't work, or isn't clear enough.
UPDATE
If you get the error message:
/usr/bin/env: ‘awk -f’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
then change the first line of your script to #!/usr/bin/env -S awk -f (tested with GNU bash, version 4.4.23)
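As a concrete (hypothetical) example, a file named hello.awk could look like this:
#!/usr/bin/env awk -f
# print each input line prefixed with its line number
{ print NR ": " $0 }
Make it executable and run it directly:
chmod +x hello.awk
./hello.awk somefile.txt
On systems where the multi-argument shebang fails, use the -S form shown in the update above.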
You probably want
#!/bin/awk -f
(The first slash after the #! is important).
This tells Unix which program it should use to 'run' the script.
It is usually called the 'shebang', which comes from hash + bang.
If you want to run your script like this you need to make sure it is executable (chmod +x <script>).
Otherwise you can just run your script by typing the command /bin/awk -f <script>
The Shebang for Awk Explained
#! is the start of a shebang line, which tells the shell which interpreter to use for the script.
/bin/awk is the path to your awk executable. You may need to change this if your awk is installed elsewhere, or if you want to use a different version of awk.
-f is a flag telling awk to interpret the flag's argument as an awk script file. In a shebang, it makes some awks treat the remainder of the script file itself as the program to run.
Your Shebang is (Probably) Broken
You are using #!bin/awk -f which is unlikely to work, unless you have awk installed as $PWD/bin/awk. You probably meant to use #!/bin/awk instead.
In some instances, passing a flag on the shebang line may not work with your shell or your awk. If you have the rest of the shebang line correct, you might try removing the -f flag and see if that works for you.

How to escape $ in sed over ssh command?

I am trying to create a patch that users can use to remotely edit a file in a pre-defined way using sed, and I could do this manually on each computer, but it would take a long time.
The line I am struggling with is as follows:
host=[hostname]
port=[portnum]
ssh -t $host -p $port "cp ~/file1 ~/file1.bak ; sed -i \"s/fcn1('param1', $2)\n/fcn2('param2'):$zoom\n/g\" ~/file1"
This makes a backup of file1 and then edits a line in the file. I actually want to edit more than one line, but this line demonstrates the problems:
The command works, provided no $ signs are used within the sed command.
I have tried a number of ways of escaping these $ signs but cannot seem to find one that works.
I can use a . wildcard in the find, but obviously not in the replace string.
I would use single quotes for the sed command, in order to avoid expanding the $2, but single quotes are already used inside the command.
Does anyone have any ideas of how to overcome this problem? Thanks in advance for any suggestions!
This should work as well:
ssh -t $host -p $port "cp ~/file1 ~/file1.bak && sed -i \"s/fcn1('param1', \\\$2)/fcn2('param2'):\\\$zoom/g\" file1"
You need three backslashes because you have to escape the $ sign in the string that the remote bash passes to sed, and you have to escape that backslash and the $ sign again when sending the command over via ssh.
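To see why, it helps to trace the unescaping step by step (roughly): the local shell turns \\\$ into \$, so the command string sent over ssh is
cp ~/file1 ~/file1.bak && sed -i "s/fcn1('param1', \$2)/fcn2('param2'):\$zoom/g" file1
and the remote shell then turns \$ into a literal $ inside its double quotes, so sed finally runs
s/fcn1('param1', $2)/fcn2('param2'):$zoom/g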

Find and Replace text in .sql files

I have many .sql files in subfolders. I am presently manually opening them up, searching for OLDSERVERNAME, and replacing it with NEWSERVERNAME (I'm doing a migration). There must be a faster way to do this. I tried using FART, but I guess I wasn't doing it right.
This is what I tried (in the main folder):
fart -i -p -c *.sql OLDSERVERNAME NEWSERVERNAME
Can I perhaps use unix utilities for this purpose?
You can use sed for this. sed stands for Stream Editor.
sed -i 's/OLDSERVERNAME/NEWSERVERNAME/g' *.sql
-i option will do in-file substitution.
g implies global substitution, so if there is more than one instance of OLDSERVERNAME on a line, they will all get replaced with NEWSERVERNAME.
*.sql will pass all files ending with .sql extension.
Look up sed man page for more details.
On macOS I had to add a backup file extension:
sed -i '.bak' 's/OLDSERVERNAME/NEWSERVERNAME/g' *.sql
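Since the question mentions .sql files in subfolders, a recursive variant using find (a sketch, assuming GNU sed's in-place -i) is:
find . -name '*.sql' -exec sed -i 's/OLDSERVERNAME/NEWSERVERNAME/g' {} +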

why did sqlcmd -v foo="c:\path" eat the "c:"?

I have foo.sql as:
print 'foo=$(foo)'
Then I have in foo.cmd the following shell script:
sqlcmd -i foo.sql -v foo="c:\path"
Running foo.cmd prints:
foo=\path
how do I escape the "c:"? Is dos-shell eating it, or is it sqlcmd?
cmd's argument delimiters include the equal sign. I've seen in other cases (such as bjam.exe) that the entire parameter sequence has to be quoted to work properly.
Try this:
sqlcmd -i foo.sql -v "foo=c:\path"
If it still strips the "c:" portion, I'd focus on sqlcmd. I don't personally have it installed to test with. This is based solely on experience with similar situations.
OK, my mistake, the above does work.
What I did wrong was: sqlcmd -i foo.sql -v foo='c:\path'
(single quotes, since I tried to pass the value as a '...' SQL string). That won't work; it will chop the c:.
Using another shell can cause this.
I just had this when running sqlcmd via PowerShell. Switching to cmd.exe made it work fine.
Use double quotes to escape the ":" and single quotes so that SQL treats the variable value as a string, e.g.
sqlcmd -S . -d myDb -i .\test.sql -v pathToFile = "'D:\Temp\temp\My.csv'"
Escape the backslash,
sqlcmd -i foo.sql -v foo="c:\\path"
It's actually your shell eating the \.

How to determine the line ending of a file

I have a bunch (hundreds) of files that are supposed to have Unix line endings. I strongly suspect that some of them have Windows line endings, and I want to programmatically figure out which ones do.
I know I can just run flip -u or something similar in a script to convert everything, but I want to be able to identify those files that need changing first.
You can use the file tool, which will tell you the type of line ending. Or, you could just use dos2unix -U which will convert everything to Unix line endings, regardless of what it started with.
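For example (the exact wording varies between versions of file, and the filename here is just a placeholder):
> file query.sql
query.sql: ASCII text, with CRLF line terminators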
You could use grep
egrep -l $'\r'\$ *
Something along the lines of:
perl -p -e 's[\r\n][WIN\n]; s[(?<!WIN)\n][UNIX\n]; s[\r][MAC\n];' FILENAME
though some of that regexp may need refining and tidying up.
That'll output your file with WIN, MAC, or UNIX at the end of each line. Good if your file is somehow a dreadful mess (or a diff) and has mixed endings.
Here's the most failsafe answer. Stimms' answer doesn't account for subdirectories and binary files:
find . -type f -exec file {} \; | grep "CRLF" | awk -F ':' '{ print $1 }'
Use file to find the file type. Files reported with CRLF have Windows line endings. The output of file is delimited by a colon, and the first field is the path of the file.
Unix uses one byte, 0x0A (line feed), while Windows uses two bytes, 0x0D 0x0A (carriage return, line feed).
If you never see a 0x0D, then it's very likely Unix. If you see 0x0D 0x0A pairs, then it's very likely MS-DOS/Windows.
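If you want to inspect the bytes yourself, od -c prints them readably (the filename is just a placeholder); DOS/Windows files show \r \n at the end of each line, Unix files only \n:
> od -c somefile.txt | head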
Windows uses two characters, 13 and 10 (CR LF), for line endings; Unix uses only one of them, character 10 (LF). So you can replace each CR LF pair with a single LF.
When you know which files have Windows line endings (0x0D 0x0A, or \r \n), what will you do with those files? I suppose you will convert them to Unix line endings (0x0A, or \n). You can convert a file with Windows line endings to Unix line endings with the sed utility, using the command:
$> sed -i 's/\r//' my_file_with_win_line_endings.txt
You can put it into script like this:
#!/bin/bash
# Recursively walk the current directory tree and strip the Windows
# carriage return (\r) from every regular file found.
function travers()
{
    for file in *; do                 # iterate over entries in the current directory
        if [ -f "${file}" ]; then
            sed -i 's/\r//' "${file}" # remove the carriage return from each line
        elif [ -d "${file}" ]; then
            cd "${file}"              # descend into the subdirectory,
            travers                   # process it recursively,
            cd ..                     # and come back up
        fi
    done
}
travers
If you run it from the root directory of your files, at the end you can be sure all files have Unix line endings.
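If you have find available, a one-liner alternative to the recursive function is (a sketch, assuming GNU sed's -i; like the script above it touches every regular file, including binaries):
find . -type f -exec sed -i 's/\r$//' {} +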