I'm writing a shell script and I need to strip FIND ME out of something like this:
* *[**FIND ME**](find me)*
and assign it to an array. The code worked flawlessly until I moved the script to a non-global zone on Solaris. Here is the code I used before:
objectArray[$i]=`echo $line | awk -F '*[**|**]' '{print $2}'`
This now prints:
awk: syntax error near line 1
awk: bailing out near line 1
It was suggested that I try the same command with nawk, but I receive this error now instead:
nawk: illegal primary in regular expression `* *[**|**]` at `*[**|**]`
input record number 1
source line number 1
Also, /usr/xpg4/bin/awk does not exist.
I think you need to be clearer about what you want. For me, your awk line doesn't 'strip FIND ME out':
echo "* *[**FIND ME**](find me)*" | nawk -F '* *[**|**]' '{print $2}'
[
So it would help if you gave some examples of the input/output you are expecting. Maybe there's a way to do what you want with sed?
EDIT:
From the comments, you actually want to select "FIND ME" from the line, not strip it out.
I guess the dialect of regular expressions accepted by this nawk differs from gawk's, so maybe a tool better suited to the job is in order.
echo "* *[**FIND ME**](find me)*" | sed -e"s/.*\* \*\[\*\*\(.[^*]*\)\*\*\].*/\1/"
FIND ME
Quote your $line variable, like this: "$line". If it still doesn't work, you can do it another way with nawk, since you only want to find one instance of FIND ME:
$ echo "$line" | nawk '{gsub(/.*\*\[\*\*|\*\*\].*/,"");print}'
FIND ME
or if you are using bash/ksh on Solaris,
$ line="${line#*\[\*\*}"
$ echo "${line%%\*\*\]*}"
FIND ME
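For reference, here is a self-contained sketch of both approaches from this answer (the sample line is recreated from the question; the array and loop are omitted):

```shell
# Sample record recreated from the question.
line='* *[**FIND ME**](find me)*'

# awk approach: delete everything up to "*[**" and from "**]" onward.
found_awk=$(printf '%s\n' "$line" | awk '{gsub(/.*\*\[\*\*|\*\*\].*/,""); print}')

# Pure-shell approach: strip the shortest prefix through "[**",
# then the longest suffix starting at "**]".
rest="${line#*\[\*\*}"
found_sh="${rest%%\*\*\]*}"

echo "$found_awk"   # FIND ME
echo "$found_sh"    # FIND ME
```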
I'm attempting to get the last word of the first line from a file, and I'm not sure why the following command:
send "cat moo.txt | grep QUACK * | awk 'NF>1{print $NF}' meow.txt >> bark.txt "
is getting the error message can't read "NF": no such variable.
I can run the awk 'NF>1{print $NF}' meow.txt >> bark.txt snippet just fine on my machine. Yet, when it runs in my expect script, it gives me that error.
Anyone know why expect doesn't recognize the awk built-in variable?
I think your script is trying to expand the variable $NF to its value before passing the command through send. $NF isn't set in your shell, since it's internal to awk, which hasn't had a chance to run yet, so expect balks.
Try escaping that variable so it is treated as a string literal; awk will then be able to use it when it comes time for awk to run:
send "cat moo.txt | grep QUACK * | awk 'NF>1{print \$NF}' meow.txt >> bark.txt "
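The expansion can be demonstrated without expect; here sh -c plays the role of the double-quoted send string, and meow.txt is a throwaway file created just for the demo:

```shell
# $NF is unset in the outer shell, so inside double quotes it expands
# to nothing before the inner shell (or expect) ever runs awk.
printf 'a b c\n' > meow.txt
unescaped=$(sh -c "awk 'NF>1{print $NF}' meow.txt")   # awk sees NF>1{print } -> whole line
escaped=$(sh -c "awk 'NF>1{print \$NF}' meow.txt")    # awk sees NF>1{print $NF} -> last field
echo "$unescaped"   # a b c
echo "$escaped"     # c
rm -f meow.txt
```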
I have a lot of text files in which I would like to find the word 'CASE' and replace it with the related filename.
I tried
find . -type f | while read file
do
awk '{gsub(/CASE/,print "FILENAME",$0)}' $file >$file.$$
mv $file.$$ >$file
done
but I got the following error
awk: syntax error at source line 1 context is >>> {gsub(/CASE/,print <<< "CASE",$0)}
awk: illegal statement at source line 1
I also tried
for i in $(ls *);
do
awk '{gsub(/CASE/,${i},$0)}' ${i} > file.txt;
done
getting an empty output and
awk: syntax error at source line 1 context is >>> {gsub(/CASE/,${ <<<
awk: illegal statement at source line 1
Why awk? sed is what you want:
while read -r file; do
sed -i "s/CASE/${file##*/}/g" "$file"
done < <( find . -type f )
or
while read -r file; do
sed -i.bak "s/CASE/${file##*/}/g" "$file"
done < <( find . -type f )
To create a backup of the original.
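As a quick sanity check (GNU sed assumed for -i; demo_dir/notes.txt is a made-up file for the example):

```shell
mkdir -p demo_dir
printf 'open CASE now\n' > demo_dir/notes.txt
file=./demo_dir/notes.txt
sed -i "s/CASE/${file##*/}/g" "$file"   # ${file##*/} strips the directory part
result=$(cat "$file")
echo "$result"    # open notes.txt now
rm -rf demo_dir
```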
You didn't post any sample input and expected output so this is a guess but maybe this is what you want:
find . -type f |
while IFS= read -r file
do
awk '{gsub(/CASE/,FILENAME)} 1' "$file" > "${file}.$$" &&
mv "${file}.$$" "$file"
done
Every change I made to the shell code is important, so if you don't understand why I changed any part of it, please ask.
btw if after making the changes you are still getting the error message:
awk: syntax error at source line 1
awk: illegal statement at source line 1
then you are using old, broken awk (/usr/bin/awk on Solaris). Never use that awk. On Solaris use /usr/xpg4/bin/awk (or nawk if you must).
Caveats: the above will fail if your file name contains newlines, ampersands (&), or escaped digits (e.g. \1). See Is it possible to escape regex metacharacters reliably with sed for details. If any of that is a problem, post some representative sample input and expected output.
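For example, on a throwaway file (demo.txt is invented here, not from the question), the corrected loop body behaves like this:

```shell
printf 'this CASE here\n' > demo.txt
# awk's FILENAME variable holds the name of the current input file.
awk '{gsub(/CASE/,FILENAME)} 1' demo.txt > demo.txt.$$ &&
mv demo.txt.$$ demo.txt
result=$(cat demo.txt)
echo "$result"   # this demo.txt here
rm -f demo.txt
```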
The print in that first script is the error.
The second argument to gsub is the replacement string, not a command.
You want just FILENAME. (Note: not "FILENAME", which is a literal string, but FILENAME, the variable.)
find . -type f -print0 | while IFS= read -d '' file
do
awk '{gsub(/CASE/,FILENAME,$0)} 7' "$file" >"$file.$$"
mv "$file.$$" "$file"
done
Note that I quoted all your variables and fixed your find | read pipeline to work correctly for files with odd characters in the names (see Bash FAQ 001 for more about that). I also fixed the erroneous > in the mv command.
See the answers on this question for how to properly escape the original filename to make it safe to use in the replacement portion of gsub.
Also note that recent (4.1+ I believe) versions of awk have the -i inplace argument.
To fix the second script you need to add the quotes you removed from the first script.
for i in *; do awk '{gsub(/CASE/,"'"${i}"'",$0)} 1' "${i}" > file.txt; done
Note that I got rid of the worse-than-useless use of ls (worse than useless because it actively breaks on files with spaces or shell metacharacters in their names; see Parsing ls for more on that).
That command is somewhat ugly and unsafe for filenames containing various characters, though, and would be better written as:
for i in *; do awk -v fname="$i" '{gsub(/CASE/,fname,$0)} 1' "${i}" > file.txt; done
since that will work correctly with filenames containing double quotes etc., whereas the direct variable-expansion version will not.
That being said, the corrected first script is the right answer.
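To see why, try a file name containing a double quote (an artificial example): the spliced-quotes version would generate an awk syntax error here, while the -v version passes the name through intact.

```shell
# A file name that breaks the '"'"${i}"'"' splice but not -v.
i='my "odd" file.txt'
printf 'CASE closed\n' > "$i"
out=$(awk -v fname="$i" '{gsub(/CASE/,fname)} 1' "$i")
echo "$out"   # my "odd" file.txt closed
rm -f -- "$i"
```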
I've spent countless hours trying to get this to work and I think it's time to get some help. I have a 2-column file - let's call it "result.txt" - with a list of values like this:
fileA.ext -10.3
fileB.ext -9.8
fileC_1.ext -9.7
fileC_2.ext -9.5
fileD.ext -9.4
fileC_3.ext -9.3
I want to recreate this list using only unique results for each file type, so it should look like this:
fileA.ext -10.3
fileB.ext -9.8
fileC_1.ext -9.7
fileD.ext -9.4
I created a list of file-name prefixes, intending to use grep or sed to extract the first line containing each one:
fileA
fileB
fileC
fileD
We'll call this result2.txt.
I have attempted to write the following C shell script:
foreach l (`cat result2.txt`)
set name = "$l"
echo "$name"
grep -m1 "$name" result.txt >> result3.txt
end
The output file, "result3.txt" is empty. The script runs perfectly up to the grep command. When I run the grep command outside of the loop, using a line from result2.txt, it works fine. I get the same result using this: sed -n '/"\$name\"/p'
And I think I tried an awk command at some point.
The problem seems to be in getting those programs to recognise the $name or $l variables. I have tried different combinations of " and ' around $name and I have tried adding backslashes: e.g. $\name. Can anyone please tell me what the issue is?
Thanks
Sounds like a job for awk. Use underscore or whitespace as the field separator, and print a line only if the first field has not been seen yet:
awk -F '[_[:space:]]+' '!seen[$1]++' << END
fileA.ext -10.3
fileB.ext -9.8
fileC_1.ext -9.7
fileC_2.ext -9.5
fileD.ext -9.4
fileC_3.ext -9.3
END
fileA.ext -10.3
fileB.ext -9.8
fileC_1.ext -9.7
fileD.ext -9.4
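Applied to a file instead of a here-document (result.txt recreated from the question), the same one-liner produces result3.txt directly:

```shell
# Recreate the poster's input file.
cat > result.txt <<'EOF'
fileA.ext -10.3
fileB.ext -9.8
fileC_1.ext -9.7
fileC_2.ext -9.5
fileD.ext -9.4
fileC_3.ext -9.3
EOF
# Keep only the first line seen for each prefix before "_" or whitespace.
awk -F '[_[:space:]]+' '!seen[$1]++' result.txt > result3.txt
lines=$(wc -l < result3.txt)
first=$(head -1 result3.txt)
cat result3.txt      # the four unique lines; fileC_2 and fileC_3 are dropped
rm -f result.txt result3.txt
```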
I've just tried this in csh, and both your version and the following simplified version work. Note: no quotation marks at all.
foreach name (`cat result2.txt`)
grep -m1 $name result.txt >>result3.txt
end
Could you please check whether result.txt really contains what you mentioned at the beginning?
cat result.txt
sed -n 's/.*/²&³/;H
$ {x;s/\(.\).*/&\1/
t again
: again
s/²\([^_]\{1,\}_\)\(.*\)\²\1[^³]*³./²\1\2/
t again
s/.\(.*\)./\1/;s/[²³]//g
p
}' YourFile
Two temporary delimiters, ² and ³, are used due to limitations in \n manipulation.
I am using this command
num1=2.2
num2=4.5
result=$(awk 'BEGIN{print ($num2>$num1)?1:0}')
This always returns 0, whether num2>num1 or num1>num2.
But when I put the actual numbers in directly:
result=$(awk 'BEGIN{print (4.5>2.2)?1:0}')
I get a return value of 1, which is correct.
What can I do to make this work?
The reason it fails when you use variables is that the awk script enclosed in single quotes is evaluated by awk, not by bash. So if you'd like to pass bash variables to awk, you have to specify them with the -v option, as follows:
num1=2.2
num2=4.5
result=$(awk -v n1="$num1" -v n2="$num2" 'BEGIN{print (n2>n1)?1:0}')
Note that program variables used inside the awk script must not be prefixed with $.
Try doing this :
result=$(awk -v num1=2.2 -v num2=4.5 'BEGIN{print (num2 > num1) ? 1 : 0}')
See :
man awk | less +/'^ *-v'
Because $num1 and $num2 are not expanded by bash -- you are using single quotes. The following will work, though:
result=$(awk "BEGIN{print ($num2>$num1)?1:0}")
Note, however, as pointed out in the comments, that this is poor coding style and mixes bash and awk. Personally, I don't mind such constructs; but in general, especially for complex things, and if you don't remember what gets evaluated by bash inside double quotes, turn to the other answers to this question.
See the excellent example from @EdMorton below in the comments.
EDIT: Actually, instead of awk, I would use bc:
num1=2.2
num2=4.5
result=$( echo "$num2 > $num1" | bc )
Why? Because it is just a bit clearer... and lighter.
Or with Perl (because it is shorter, because I like Perl more than awk, and because I like backticks more than $()):
result=`perl -e "print ( $num2 > $num1 ) ? 1 : 0;"`
Or, to be fancy (and probably inefficient):
if [ `echo -e "$num1\n$num2" | sort -n | head -1` != "$num1" ] ; then result=0 ; else result=1 ; fi
(Yes, I know)
I had a brief, intensive, 3-year-long exposure to awk in prehistoric times. Nowadays bash is everywhere and can do loads of stuff (I had only sh/csh back then), so it can often be used instead of awk, and computers are fast enough for Perl to be used in ad-hoc command lines instead of awk. Just sayin'.
This might work for you:
result=$(awk 'BEGIN{print ('$num2'>'$num1')?1:0}')
Think of the ''s as poking holes through the awk command to the underlying bash shell.
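Spelling out the holes with the question's values (the comment shows the program text awk actually compiles):

```shell
num1=2.2
num2=4.5
# After the shell pastes the values into the holes, awk compiles:
#   BEGIN{print (4.5>2.2)?1:0}
result=$(awk 'BEGIN{print ('$num2'>'$num1')?1:0}')
echo "$result"   # 1
```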
I'm tweaking a KSH script, and I'm trying to ssh into various hosts and execute a grep command on vfstab that returns a certain line. The problem is, I can't get the line below to work: I'm trying to take the line it returns and append it to a destination file. Is there a better way to do this, e.g. assigning the grep statement to a command variable? The command works fine within the script, but the nested quotations seem to bugger it. Anyway, here's the line:
ssh $user@$host "grep '/var/corefiles' $VFSTAB_LOC | awk '{print $3, $7}' " >> $DEST
This results in:
awk: syntax error near line 1
awk: illegal statement near line one
If there is a better/more correct way to do this please let me know!
You're putting the remote command in double quotes, so the $3 and $7 in the awk body are substituted by the local shell; awk probably sees '{print ,}'. Escape the dollar signs in the awk body:
ssh $user@$host "grep '/var/corefiles' $VFSTAB_LOC | awk '{print \$3, \$7}' " >> $DEST
^ ^
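The effect is easy to reproduce locally without ssh; here sh -c stands in for the remote shell, and vfstab_demo is a made-up stand-in for $VFSTAB_LOC:

```shell
printf '/dev/dsk/c0 /var/corefiles a b c d e f\n' > vfstab_demo
# Unescaped: the local shell expands $3 and $7 to nothing, so awk is
# handed '{print , }' and dies with a syntax error (stderr discarded).
broken=$(sh -c "grep '/var/corefiles' vfstab_demo | awk '{print $3, $7}'" 2>/dev/null)
# Escaped: awk receives the literal $3 and $7 and prints those fields.
fixed=$(sh -c "grep '/var/corefiles' vfstab_demo | awk '{print \$3, \$7}'")
echo "$fixed"    # a e
rm -f vfstab_demo
```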
I tried the below and it worked for me (in ksh); I'm not sure why it would error out in your case:
user="username";
host="somehost";
VFSTAB_LOC="result.out";
DEST="/home/username/aaa.out";
echo $DEST;
`ssh $user@$host "grep '/abc/dyf' $VFSTAB_LOC | awk '{print $3, $1}'" >> $DEST`;