awk: non-terminated string - awk

I'm trying to run the command below, and it's giving me the error shown. Any thoughts on how to fix it? I would rather have this be a one-line command than a script.
grep "id\": \"http://room.event.assist.com/event/room/event/" failed_events.txt |
head -n1217 |
awk -F/ ' { print $7 } ' |
awk -F\" ' { print "url \= \"http\:\/\/room\.event\.assist\.com\/event\/room\/event\/'{ print $1 }'\?schema\=1\.3\.0\&form\=json\&pretty\=true\&token\=582EVTY78-03iBkTAf0JAhwOBx\&account\=room_event\"" } '
awk: non-terminated string url = "ht... at source line 1
context is
>>> <<<
awk: giving up
source line number 2
The line below exports out a single column of ID's:
grep "id\": \"http://room.event.assist.com/event/room/event/" failed_events.txt |
head -n1217 |
awk -F/ ' { print $7 } '
156512145
898545774
454658748
898432413
I'm looking to get the ID's above into a string like so:
" url = "string...'ID'string"

Take a look at what you have in the last awk:
awk -F\"
' # single quote starts the awk program here
{ print " # double quote starts the string for print, and is never closed
url \= \"http\:\/\/room\.event\.assist\.com\/event\/room\/event\/
' # single quote ends the awk program here???
{ print $1 }'..... # and a single quote starts again??? ...
(rest of the command)
So the string handed to print is never terminated before the quotes end the program, and a second { print $1 } ends up nested inside it. Do you really want a literal { print } in the output? I don't think so, so there is no reason to nest print like that.
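For what it's worth, here is a sketch of the last stage with the quoting untangled (untested against your data; it keeps your -F\" and $1, and only the literal double quotes need a backslash inside the awk string, since the shell single quotes protect everything else):
... | awk -F\" '{ printf("url = \"http://room.event.assist.com/event/room/event/%s?schema=1.3.0&form=json&pretty=true&token=582EVTY78-03iBkTAf0JAhwOBx&account=room_event\"\n", $1) }'
This keeps the whole thing on one line, as requested.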

Most of the elements of your pipe can be expressed right inside awk.
I can't tell exactly what you want to do with the last awk script, but here are some points:
Your "grep" is really just looking for a string of text, not a
regexp.
You can save time and simplify things if you use awk's
index() function instead of a RE. Output formats are almost always
best handled using printf().
Since you haven't provided your input data, I can't test this code, so you'll need to adapt it if it doesn't work. But here goes:
awk -F/ '
BEGIN {
string="id\": \"http://room.event.assist.com/event/room/event/";
fmt="url = http://example.com/event/room/event/%s?schema=whatever\n";
}
count == 1217 { nextfile; }      # stop after 1217 matches, like the head -n1217 in the pipeline
index($0, string) {              # literal string match, no regexp needed
split($7, a, "\"");              # strip any trailing quote from the 7th /-separated field
printf(fmt, a[1]);               # split() arrays are 1-indexed, so a[1] (not a[0]) holds the ID
count++;
}' failed_events.txt
If you like, you can use awk's -v option to pass in the string variable from a shell script calling this awk script. Or if this is a stand-alone awk script (using #! shebang), you could refer to command line options with ARGV.
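For instance, a sketch of that -v form (reusing the placeholder format string from the script above):
awk -F/ -v string='id": "http://room.event.assist.com/event/room/event/' '
BEGIN { fmt="url = http://example.com/event/room/event/%s?schema=whatever\n" }
index($0, string) { split($7, a, "\""); printf(fmt, a[1]) }
' failed_events.txt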


Search pattern and print it on the line before (awk)

I have this file :
>AX-89948491
CACCTTTT[C/T]ATTTCATTCCTAC
>AX-89940152
AGATGAGA[A/G]TAAAGCTTCTGTC
>AX-89922107
ACAGAAAT[G/T]TATAGATATTACT
I need to find the pattern "[A-Z]/[A-Z]" (it is necessarily present every two lines) ; and put it on the line before like this :
>AX-89948491-[C/T]
CACCTTTT[C/T]ATTTCATTCCTAC
>AX-89940152-[A/G]
AGATGAGA[A/G]TAAAGCTTCTGTC
>AX-89922107-[G/T]
ACAGAAAT[G/T]TATAGATATTACT
I did :
awk 'tmp=/\[[A-Z]\/[A-Z]]/{if (a && a !~ /\[[A-Z]\/[A-Z]]/) print a"-"$tmp; print} {a=$0}' my_file
But that gives the entire line , not the pattern.
Any help?
You could print the previous line plus the current matched part of the pattern, and given that it is present every 2 lines:
awk '
match($0, /\[[A-Z]\/[A-Z]]/) {
m = substr($0, RSTART, RLENGTH)
print prev "-" m ORS $0
}
{prev = $0}
' my_file
Output
>AX-89948491-[C/T]
CACCTTTT[C/T]ATTTCATTCCTAC
>AX-89940152-[A/G]
AGATGAGA[A/G]TAAAGCTTCTGTC
>AX-89922107-[G/T]
ACAGAAAT[G/T]TATAGATATTACT
With your shown samples only, please try the following awk program. It is a tac + awk + tac solution: tac prints the input in reverse line order (bottom to top) and sends it to awk, which uses match to grab the [A-Z]/[A-Z] value into the variable val and prints that line unchanged; a line where match finds nothing (i.e. a header line that needs the value) is printed with - and val appended. Passing the output through tac again restores the original line order shown in your samples.
tac Input_file |
awk '
match($0,/\[[A-Z]\/[A-Z]]/){
  val=substr($0,RSTART,RLENGTH)   # remember the [X/Y] value from the data line
  print
  next
}
{
  print $0"-"val                  # header line: append the value saved from its data line
}
' | tac

Can't replace a string with a multi-line string using sed

I'm trying to replace a fixed string ("replaceMe") in a text with a multi-line string using sed.
My bash script goes as follows:
content=$(awk '{print $5}' < data.txt | sort | uniq)
target=$(cat install.sh)
text=$(sed "s/replaceMe/$content/" <<< "$target")
echo "${text}"
If content contains one line only, the replacement works, but if it contains several lines I get:
sed: ... unterminated `s' command
I read about fetching multi-line content, but I couldn't find anything about inserting a multi-line replacement string.
You'll have more problems than that depending on the contents of data.txt since sed doesn't understand literal strings (see Is it possible to escape regex metacharacters reliably with sed). Just use awk which does:
text="$( awk -v old='replaceMe' '
NR==FNR {
if ( !seen[$5]++ ) {
new = (NR>1 ? new ORS : "") $5
}
next
}
s = index($0,old) { $0 = substr($0,1,s-1) new substr($0,s+length(old)) }
{ print }
' data.txt install.sh )"
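A quick check with made-up file contents (the real data.txt and install.sh aren't shown in the question):
$ printf 'x x x x alpha\nx x x x beta\nx x x x alpha\n' > data.txt
$ printf 'value="replaceMe"\n' > install.sh
$ # ... run the snippet above ...
$ echo "$text"
value="alpha
beta"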

How can I store the length of a line into a var within an awk script?

I have this simple awk script with which I attempt to check the number of characters in the first line.
If the first line has more or less than 10 characters I want to store the number
of characters in a var.
Somehow the first print statement works, but storing that result into a var doesn't.
Please help.
I tried removing the dollar sign ("thelength=(length($0))")
and removing the parentheses ("thelength=length($0)") but it doesn't print anything...
Thanks!
#!/bin/ksh
awk ' BEGIN {FS=";"}
{
if (NR==1)
if(length($0)!=10)
{
print(length($0))
thelength=$(length($0))
print "The length of the first line is: ",$thelength;
exit 1;
}
}
END { print "STOP" }' $1
Two issues here, both from mixing ksh and awk syntax:
no need for a sub-shell style call ($(...)) within awk to obtain the length; use thelength=length($0)
awk variables must not have a leading $ when referenced ($ means field access in awk); use print ..., thelength
So your code becomes:
#!/bin/ksh
awk ' BEGIN {FS=";"}
{
if (NR==1)
if(length($0)!=10)
{
print(length($0))
thelength=length($0)
print "The length of the first line is: ",thelength;
exit 1;
}
}
END { print "STOP" }' $1

Removing Quote From Field For Filename Using AWK

I've been playing around with this for an hour trying to work out how to embed the removal of quotes from a specific field using AWK.
Basically, the file encapsulates text in quotes, but I want to use the second field to name the file and split them based on the first field.
ID,Name,Value1,Value2,Value3
1,"AAA","DEF",1,2
1,"AAA","GGG",7,9
2,"BBB","DEF",1,2
2,"BBB","DEF",9,0
3,"CCC","AAA",1,1
What I want to get out are three files, all with the header row named:
AAA [1].csv
BBB [2].csv
CCC [3].csv
I have got it all working, except for the fact that I can't for the life of me work out how to remove the quotes around the filename!!
So, this command does everything except that the file is named with quotes around $2; I need to do some kind of transformation on $2 before it goes into evname. Inside the actual file I want to keep the encapsulating quotes.
awk -F, 'NR==1{h=$0;next}!($1 in files){evname=$2" ["$1"].csv";files[$1]=1;print h>evname}{print > evname}' DataExtract.csv
I've tried to push a gsub into this, but I'm struggling to work out exactly how it should look.
This is, I think, as close as I have got, but it just names everything "2" for $2. I'm not sure if this means I need to escape $2 somehow in the gsub, but trying that doesn't seem to work either, so I'm at a loss as to what I'm doing wrong.
awk -F, 'NR==1{h=$0;next}!($1 in files){evname=gsub(""\","", $2)" - Event ID ["$1"].csv";files[$1]=1;print h>evname}{print > evname}' DataExtract.csv
Any help greatly appreciated.
Thanks in advance!!
Gannon
If I understand what you are attempting correctly, then
awk -F, 'NR==1{h=$0;next}!($1 in files){gsub(/"/, "", $2); evname=$2" ["$1"].csv";files[$1]=1;print h>evname}{print > evname}' DataExtract.csv
should work. That is
NR == 1 {
h = $0;
next
}
!($1 in files) {
stub = $2 # <-- this is the new bit: make a working copy
# of $2 (so that $2 is unchanged and the line
# is not rebuilt with changes for printing),
gsub(/"/, "", stub) # remove the quotes from it, and
evname = stub " [" $1 "].csv" # use it to assemble the filename.
files[$1] = 1;
print h > evname
}
{
print > evname
}
You can, of course, use
evname = stub " - Event ID [" $1 "].csv"
or any other format after the substitution (this one seems to be what you tried to get in your second code snippet).
The gsub function returns the number of substitutions made, not the result of the substitutions; that is why evname=gsub(""\","", $2)" - Event ID ["$1"].csv" does not work.
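You can see that return value with one of the sample lines:
$ echo '1,"AAA","DEF",1,2' | awk -F, '{ print gsub(/"/, "", $2), $2 }'
2 AAA
Two quotes are removed, so gsub() returns 2, and that 2 is what ended up in your filenames.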
Things are always clearer with a little white space:
awk -F, '
NR==1 { hdr=$0; next }
!seen[$1]++ {
evname = $2
gsub(/"/,"",evname)
outfile = evname " [" $1 "].csv"
print hdr > outfile
}
{ print > outfile }
' DataExtract.csv
Aside: It's pretty unusual for someone to WANT to create files with spaces in their names given the complexity that introduces in any later scripts you write to process them. You sure you want to do that?
P.S. here's the gawk version as suggested by @JID below
awk -F, '
NR==1 { hdr=$0; next }
!seen[$1]++ {
outfile = gensub(/"/,"","g",$2) " [" $1 "].csv"
print hdr > outfile
}
{ print > outfile }
' DataExtract.csv
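With the sample input above, each of the three output files should then contain the header row plus that ID's rows; for example:
$ cat 'CCC [3].csv'
ID,Name,Value1,Value2,Value3
3,"CCC","AAA",1,1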
Apply the gsub before you make the assignment:
awk -F, 'NR==1{h=$0;next}
!($1 in files){
gsub("\"","",$2); # Add this line
evname=$2" ["$1"].csv";files[$1]=1;print...

How to rewrite an Awk script to process several files instead of one

I am writing a report tool which processes the source files of some application and produces a report table with two columns: one containing the name of the file, and the other containing the word TODO if the file contains a call to some deprecated function deprecated_function, and DONE otherwise.
I used awk to prepare this report and my shell script looks like
report()
{
find . -type f -name '*.c' \
| xargs -n 1 awk -v deprecated="$1" '
BEGIN { status = "DONE" }
$0 ~ deprecated{ status = "TODO" }
END {
printf("%s|%s\n", FILENAME, status)
}'
}
report "deprecated_function"
The output of this script looks like
./plop-plop.c|DONE
./fizz-boum.c|TODO
This works well but I would like to rewrite the awk script so that it supports several input files instead of just one, so that I can remove the -n 1 argument to xargs. The only solutions I could figure out involve a lot of bookkeeping, because we need to track changes of FILENAME and still handle the END event to catch the end of the last file.
awk -v deprecated="$1" '
BEGIN { status = "DONE" }
oldfilename && (oldfilename != FILENAME) {
printf("%s|%s\n", oldfilename, status);
status = DONE;
oldfilename = FILENAME;
}
$0 ~ deprecated{ status = "TODO" }
END {
printf("%s|%s\n", FILENAME, status)
}'
Maybe there is a cleaner and shorter way to handle this.
I am using FreeBSD's awk and am looking for solutions compatible with this tool.
This will work in any modern awk:
awk -v deprecated="$1" -v OFS='|' '
$0 ~ deprecated{ dep[FILENAME] }
END {
for (i=1;i<ARGC;i++)
print ARGV[i], (ARGV[i] in dep ? "TODO" : "DONE")
}
' file1 file2 ...
Any time you need to produce a report for all files and don't have GNU awk for ENDFILE, you MUST loop through ARGV[] in the END section (or loop through it in BEGIN and populate a different array for END section processing). Anything else will fail if you have empty files.
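For reference only (it doesn't apply here since the OP needs FreeBSD awk), a GNU awk sketch using ENDFILE might look like this, untested:
awk -v deprecated="$1" -v OFS='|' '
BEGINFILE       { status = "DONE" }           # gawk-only: runs before each file, including empty ones
$0 ~ deprecated { status = "TODO" }
ENDFILE         { print FILENAME, status }    # gawk-only: runs after each file
' file1 file2 ...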
Your awk script could be something like this:
awk -v deprecated="$1" '
FNR==1 {if(file) print file "|" (f?"TODO":"DONE"); file=FILENAME; f=0}
$0 ~ deprecated {f=1}
END {print file "|" (f?"TODO":"DONE")}' file1.c file2.c # etc.
The logic is fairly similar to your program so hopefully it's all clear. FNR is the record number of the current file, which I'm using to detect the start of a new file. Admittedly there's some repetition in the END block but I don't think it's a big deal. You could always use a function if you wanted to.
Testing it out:
$ cat f1.c
int deprecated_function()
{
// some deprecated stuff
}
$ cat f2.c
int good_function()
{
// some good stuff
}
$ find -name "f?.c" -print0 | xargs -0 awk -v deprecated="deprecated" 'FNR==1 {if(file) print file "|" (f?"TODO":"DONE"); file=FILENAME; f=0} $0 ~ deprecated {f=1} END {print file "|" (f?"TODO":"DONE")}'
./f2.c|DONE
./f1.c|TODO
I have used -print0 and the -0 switch to xargs so that both programs separate file names with null bytes "\0" rather than whitespace. This means that you won't run into problems with file names containing spaces.