Is there a way to log Awk results? - awk

Let me start off by saying I am not a seasoned programmer by any stretch of the imagination, so please bear with me. :-)
We use the GNUWIN32 awk command in a batch file, like so:
awk -F, -f awk1.txt TDIC-LA-CLM.apc > TDIC-LA-CLM.out
Is there a way to log the results of this command when used like the above example? I tried adding ">> logfile" to the end of the command above but then the command fails.
EDIT: What I would like is for the result code and/or any errors to be logged. I do not want the output of AWK to go to multiple files, which, from what I gather, is what the tee command does. For example, if you add >> logfile to the end of a DOS move command, the result of that move command is logged in logfile, e.g. 1 file(s) moved.
Thanks!

Just so folks can see the answer as an answer rather than buried in the comments, you can redirect the stderr stream of awk to a file like this:
awk ... > TDIC-LA-CLM.out 2> errors.txt
For further info, see the redirection documentation on the ss64 website.
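If you also want the command's exit status recorded (the way a DOS move reports 1 file(s) moved), here is a minimal batch sketch; errors.txt and logfile are just names chosen for illustration:
awk -F, -f awk1.txt TDIC-LA-CLM.apc > TDIC-LA-CLM.out 2> errors.txt
echo awk exited with code %ERRORLEVEL% >> logfile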

Related

Script to display only comments from /etc/services file

I need to write a bash script that takes a service name as a parameter and displays only the comment that comes after the hash symbol in /etc/services, but I have no idea how to cut out only the comment part.
The "it's working" solution for me is to just:
grep "^$1" /etc/services | awk '{print $3,$4 ...
but I don't think this is a good one
I'm searching for something like:
[find the service] -> print only the part from # till the end of the line
I'm still learning so any solution with explanation or just a hint will be very helpful for me.
Chances are this is what you're looking for:
awk -v svc="$1" '($1==svc) && sub(/[^#]+#/,"")' /etc/services
but without sample input/output it's a guess.
The above will work using any awk in any shell on every Unix box.
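For example, assuming /etc/services contains a line like this (a made-up entry just for illustration):
ssh     22/tcp     # The Secure Shell (SSH) Protocol
then running the script with ssh as its argument would print everything after the first # on that line, i.e. The Secure Shell (SSH) Protocol (with a leading space, since only the text up to and including the # is removed).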
Try this:
SERVICE_NAME=linuxconf; grep -Po "^$SERVICE_NAME.*# \K.*$" /etc/services
-P tells grep to use perl regex.
-o trims the output so that it only includes the regex match.
\K tells the regex engine to exclude previously matched part of the string from the match, i.e. only the part after \K will be present in the final match.
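With the same made-up ssh line as above:
SERVICE_NAME=ssh; grep -Po "^$SERVICE_NAME.*# \K.*$" /etc/services
would print just the comment text:
The Secure Shell (SSH) Protocol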

using literal string for gawk

I think I'm already too close to the problem to solve it on my own, although I'm sure it's easy to solve.
I'm working on a NAS with a shell script for my Raspberry Pi which automatically collects data and distributes it over my other devices. So I decided to include a delete option, since otherwise it would be a pain in the ass to delete a file, because the Raspberry would always copy it right back from the other devices. While the script runs it creates a file, del_tmp_$ip.txt, which lists the directories and files to delete from del_$ip.txt (not from del_tmp_$ip.txt itself).
It looks like this:
test/delete_me.txt
test/hello/hello.txt
pi.txt
I tried to delete the lines via awk, and this is how far I've got so far:
while read r; do
gawk -i inplace '!/^'$r'$/' del_$ip.txt
done <del_tmp_$ip.txt
If the line from del_tmp_$ip.txt tells gawk to delete pi.txt it works without problems, but if the string includes a slash like test/delete_me.txt it doesn't work:
"unexpected newline or end of string"
and it points to the last slash then.
I can't escape the forward slashes with backslashes manually, since I don't know whether, and how many, slashes there will be; it depends on the line of the file which contains the information to be deleted.
I hope you can help me!
Never allow a shell variable to expand to become part of the awk script text before awk evaluates it (which is what you're doing with '!/^'$r'$/'), and always quote your shell variables (so the correct shell syntax would have been '!/^'"$r"'$/' IF it hadn't been the wrong approach anyway). The correct syntax to write that command would have been:
awk -v r="$r" '$0 !~ "^"r"$"' file
but you said you wanted a string comparison, not regexp so then it'd be simply:
awk -v r="$r" '$0 != r' file
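For example, with the three sample lines from the question as input (a quick check; the printf here just stands in for del_$ip.txt):
printf 'test/delete_me.txt\ntest/hello/hello.txt\npi.txt\n' | awk -v r='test/delete_me.txt' '$0 != r'
which prints only the lines that are not being deleted:
test/hello/hello.txt
pi.txt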
and of course you don't need a shell loop at all:
while read r; do
gawk -i inplace '!/^'$r'$/' del_$ip.txt
done <del_tmp_$ip.txt
you just need 1 awk command:
gawk -i inplace 'NR==FNR{skip[$0];print;next} !($0 in skip)' "del_tmp_$ip.txt" "del_$ip.txt"
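Spelled out with comments, that one command does this (same logic as the one-liner, just reformatted):
gawk -i inplace '
NR==FNR {          # first file (del_tmp_...): the list of lines to remove
    skip[$0]       # remember each path
    print          # write the first file back unchanged (because of -i inplace)
    next
}
!($0 in skip)      # second file (del_...): keep only lines not in the skip list
' "del_tmp_$ip.txt" "del_$ip.txt"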

bulk renaming files rearranging file names based on delimiter

I have seen questions that are close to this, but I have not seen the exact answer I need, and I can't seem to get my head wrapped around the regex, awk, sed, grep, or rename that I would need to make it happen.
I have files in one directory, sequentially named, that were copied from multiple subdirectories of a different directory using find piped to xargs.
Command I used:
find <dir1> -name "*.png" | xargs cp -t <dir2>
This resulted in the second directory containing duplicate filenames sequentially named as follows:
<name>.png
<name>.png.~1~
<name>.png.~2~
...
<name>.png.~n~
What I would like to do is take all files ending in ~*~ and rename them as follows:
<name>.#.png, where '#' is the number between the '~'s at the end of the file name.
Any help would be appreciated.
With Perl's rename (stand alone command):
rename -nv 's/^([^.]+)\.(.+)\.~([0-9]+)~/$1.$3.$2/' *
If everything looks fine remove option -n.
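For example, a file named photo.png.~2~ (a made-up name for illustration) would be reported as renaming to photo.2.png; with -n the command only shows what it would do, without touching any files.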
There might be an easier way to do this, but here is a small shell script using grep and awk to achieve what you want:
for i in $(ls|grep ".png."); do
name=$(echo $i|awk -F'png' '{print $1}');
n=$(echo $i|awk -F'~' '{print $2}');
mv $i $name$n.png;
done
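A variant of the same idea that avoids parsing ls and uses only the shell's parameter expansion (a sketch, assuming every backup name ends in .~number~):
for i in *.png.~*~; do
name=${i%%.png.*}        # part before .png
n=${i##*.~}; n=${n%\~}   # number between the ~s
mv "$i" "$name.$n.png"
done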

How to run awk script on multiple files

I need to run a command on hundreds of files and I need help to get a loop to do this:
have a list of input files /path/dir/file1.csv, file2.csv, ..., fileN.csv
need to run a script on all those input files
script is something like: command input=/path/dir/file1.csv output=output1
I have tried things like:
for f in /path/dir/file*.csv; do command ... but how do I get it to read a new input file and write a new output file every time?
Thank you....
Try this (after changing /path/to/data to the correct path; same with /path/to/awkscript and the other places, pointing them at your test data):
#!/bin/bash
cd /path/to/data
for f in *.csv ; do
echo "awk -f /path/to/awkscript \"$f\" > ${f%.csv}.rpt"
#remove_me awk -f /path/to/awkscript "$f" > ${f%.csv}.rpt
done
make the script "executable" with
chmod 755 myScript.sh
The echo version will help you ensure the script is going to work as expected. You still have to carefully examine that output OR work on a copy of your data so you don't wreck your baseline data.
You could take the output of the last iteration
awk -f /path/to/awkscript myFileLast.csv > myFileLast.rpt
and copy/paste it to the command line to confirm it will work.
When you're comfortable that the awk script works as you need, comment out the echo awk ... line and delete the #remove_me marker so the real awk command runs (and save your bash script).
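If your command really does take input= and output= arguments as shown in the question, the same pattern applies; a sketch, assuming the output name can simply be derived from the input name:
for f in /path/dir/file*.csv; do
command input="$f" output="${f%.csv}.out"
done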
for f in /path/to/files/*.csv ; do
bname=`basename $f`
pref=${bname%%.csv}
awk -f /path/to/awkscript $f > /path/to/store/output/${pref}_new.txt
done
Hopefully this helps; I am on my BlackBerry so there may be typos.

Retain backslashes with while read loop in multiple shells

I have the following code:
#!/bin/sh
while read line; do
printf "%s\n" $line
done < input.txt
Input.txt has the following lines:
one\two
eight\nine
The output is as follows:
onetwo
eightnine
The "standard" solutions to retain the slashes would be to use read -r.
However, I have the following limitations:
must run under #!/bin/sh for reasons of portability/POSIX compliance.
not all systems will support the -r switch to read under /bin/sh
The input file format cannot be changed
Therefore, I am looking for another way to retain the backslash after reading in the line. I have come up with one working solution, which is to use sed to replace the \ with some other value (e.g. ||) in a temporary file (thus bypassing my last requirement above), then, after reading the lines in, use sed again to transform the value back. Like so:
#!/bin/sh
sed -e 's/[\/&]/||/g' input.txt > tempfile.txt
while read line; do
printf "%s\n" $line | sed -e 's/||/\\/g'
done < tempfile.txt
I'm thinking there has to be a more "graceful" way of doing this.
Some ideas:
1) Use command substitution to store this into a variable instead of a file. Problem: I'm not sure command substitution will be portable here either, and my attempts at using a variable instead of a file were unsuccessful. Regardless, file or variable, the base solution is really the same (two substitutions).
2) Use IFS somehow? I've investigated a little, but not sure that can help in this issue.
3) ???
What are some better ways to handle this given my constraints?
Thanks
Your constraints seem a little strict. Here's a piece of code I jotted down (I'm not too sure how valuable your while loop is for the other stuff you would like to do, so I removed it just for ease). I don't guarantee this code to be robust, but the logic should give you hints about the direction you may wish to proceed in. (temp.dat is the input file.)
#!/bin/sh
var1="$(cut -d\\ -f1 temp.dat)"   # the part of each line before the backslash
var2="$(cut -d\\ -f2 temp.dat)"   # the part of each line after the backslash
iter=1
set -- $var2                      # expose the second parts as $1, $2, ...
for x in $var1;do
if [ "$iter" -eq 1 ];then
echo $x "\\" $1                   # re-join the first line around a literal backslash
else
echo $x "\\" $2                   # re-join the second line
fi
iter=$((iter+1))
done
As Larry Wall once said, writing a portable shell is easier than writing a portable shell script.
perl -lne 'print $_' input.txt
The simplest possible Perl script is simpler still, but I imagine you'll want to do something with $_ before printing it.
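For instance, a made-up follow-up that upper-cases each line before printing (just to show where your own processing of $_ would go):
perl -lne 'print uc $_' input.txt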