How to calculate the throughput from a trace file - awk

For this command:
$ grep ^r lab3.tr | grep "2 6" -c | awk "{s+=$6}END{print s}"
I am getting this error
awk: line 1: syntax error at or near }
And what does "h" stand for in the following line of the trace file?
h 0.106 1 7 cbr 100 ------- 1 1.0 5.0 6 6

You are using soft (double) quotes around your awk code, so the shell interprets $6 before awk ever sees it. Because that argument is either empty or an unusable value, you get the syntax error. Use hard (single) quotes around awk code instead.
Example:
$ echo 1 2 3 | awk "{print $1}"
1 2 3
=> The shell expands $1, which is empty, so awk is left with a bare print and outputs the whole record.
$ echo 1 2 3 | awk '{print $1}'
1
=> awk itself interprets $1 as the first field of the record, so it outputs the first field.
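Applied to the original pipeline, a corrected sketch might look like this (note that -c also has to go: it makes the second grep print only a match count, leaving awk nothing to sum; whether field 6 really is the packet size in bytes depends on your trace format):
# sum the assumed packet-size field of received packets matching "2 6";
# dividing the total by the trace duration would give the throughput
grep ^r lab3.tr | grep "2 6" | awk '{s += $6} END {print s}'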

awk Can not Select Column with empty value

I am trying to select a column that has missing values.
Here is my input file, separated by tabs:
1 2 3
4 5
6
7 8
9
I am trying to select the first column, so the output would look like:
1
4
7
and the length of my column would be 5 in this case (the rows with an empty first field should stay as blank lines).
I have tried
awk '$1!=""{print $1}' ./demo.txt
but it returns
1
4
6
7
9
Can anybody help with this? I am new to awk.
You can use cut:
$ cut -f 1 file # the default delimiter is a tab
Or with sed:
$ sed 's/[[:blank:]].*$//' file
Or awk:
$ awk '{sub(/[[:blank:]].*$/,"")}1' file
Or:
$ awk 'BEGIN{FS=OFS="\t"} {print $1}' file
All those print the first column and all five lines (blank or not)
Prints:
1
4
7
Tell awk to use a tab (\t) as the input field delimiter (-F):
$ awk -F'\t' '{ print $1 }' demo.txt
1
4
7
If you want to print multiple columns while maintaining the same delimiter for the output, another approach is to use the FS and OFS variables:
$ awk 'BEGIN { FS=OFS="\t" } { print $1,$3 }' demo.txt
1 3
4 5
7
9
With sed something like:
sed 's/^\([^[:blank:]]*\).*/\1/' demo.txt
Using FIELDWIDTHS in GNU awk, you can do this for fixed-width data:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print $1}' file
1
4
7
For demo purpose:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print NR ":", $1}' file
1: 1
2: 4
3:
4: 7
5:
If they're all single digits in the 1st column:
echo \
'1 2 3
4 5
6
7 8
9' |
mawk NF=1 FS= |
gcat -n
1 1
2 4
3
4 7
5
That's literally all you need. To play it safe, do:
nawk NF=1 FS='[[:space:]]'   # overly-verbose so-called "proper" POSIX form
gawk NF=1 FS='[ \t]'         # suffices unless the input happens to have
                             # uncommon bytes like \013 \v or \014 \f
Or a very fringe way of fudging NF:
mawk 'NF ^= FS="[ \t]"'

Print the last n columns of a line

How can I print the last 4 columns using awk?
The following solution works fine with numbers:
Command:
echo "1 2 3 4 5" | awk '{print $(NF-3),$(NF-2),$(NF-1),$NF}'
Output:
2 3 4 5
However, when I use text with special characters like below, I get a syntax error:
echo "2023-01-03 01:05:06: Table test completed (0 hrs 0 mins 0.009 Secs)" | awk '{print $(NF-3),$(NF-2),$(NF-1),$NF}'
Output:
> " | awk '{print $(NF-2),$(NF-1),$NF}'
mins 0.009 Secs)
awk: The field -2 cannot be less than 0.
The input line number is 2.
The source line number is 1.
What I am expecting to print out is the below for the above example:
(0 hrs 0 mins 0.009 Secs)
How can I achieve this?
1st solution: With your shown samples, please try the following awk code. A simple explanation: it uses awk's match function with the regex \(.*\)$, which matches from the ( through the closing ) at the end of the line; if a match is found, substr prints only the matched portion, as per the OP's requirement.
echo "2023-01-03 01:05:06: Table test completed (0 hrs 0 mins 0.009 Secs" |
awk 'match($0,/\(.*\)$/){print substr($0,RSTART,RLENGTH)}' Input_file
2nd solution: If your Input_file always has "Table test completed" in it, then you can make it simpler than the 1st solution; try the following:
echo "Your_string" |
awk -F'[[:space:]]+Table test completed[[:space:]]+' '{print $NF}'
Another awk approach that splits a record using 2+ whitespaces:
echo "2023-01-03 01:05:06: Table test completed (0 hrs 0 mins 0.009 Secs)" |
awk -F '[[:blank:]]{2,}' '{print $NF}'
(0 hrs 0 mins 0.009 Secs)
You can also try this: clear the unwanted fields and print the string:
$ echo "1 2 3 4 5" | awk '{$1="" ; print}'
2 3 4 5
$ echo "1 2 3 4 5" | awk '{$1=$2=""; print}'
3 4 5
$ echo "1 2 3 4 5" | awk '{$3=$5=""; print}'
1 2 4
Either the RS way or the FS way:
echo "2023-01-03 01:05:06: Table test completed" \
     " (0 hrs 0 mins 0.009 Secs)" |
gawk 8 RS='^[^(]+' ORS=        # only suitable if input is known to be 1 line only
mawk NF=NF FS='^[^(]+' OFS=    # preferred approach
(0 hrs 0 mins 0.009 Secs)

Apply a sed command to every column of a specific row

I have a tab separated file:
samplename1/filename1 anotherthing/anotherfile asdfgh/hjklñ
2 3 4
5 6 7
I am trying to remove everything after the / just in the header of the file using sed:
sed 's/[/].*//' samplenames.txt
How can I do this for each column of the file? Right now I am removing everything after the first /, but I want to remove just the part of each column that follows its /.
Actual output:
samplename1
2 3 4
5 6 7
Desired output:
samplename1 anotherthing asdfgh
2 3 4
5 6 7
With GNU sed, you may use
sed -i '1 s,/[^[:space:]]*,,g' samplenames.txt
With FreeBSD sed, you need to add '' after -i.
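For example, the FreeBSD invocation would look like this (the empty string means "do not keep a backup copy"):
sed -i '' '1 s,/[^[:space:]]*,,g' samplenames.txt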
The -i option makes sed change the file in place. The 1 means only the first line of the file will be modified.
The s,/[^[:space:]]*,,g command removes every occurrence of / followed by zero or more non-whitespace chars.
Given:
printf "samplename1/filename1\tanotherthing/anotherfile\tasdfgh/hjklñ
2\t3\t4
5\t6\t7" >file # ie, note only one tab between fields...
Here is a POSIX awk way to do this:
awk -F $"\t" 'NR==1{gsub("/[^\t]*",""); print; next} 1' file
Prints:
samplename1 anotherthing asdfgh
2 3 4
5 6 7
You can get those to line up with the column command:
awk -F $"\t" 'NR==1{gsub("/[^\t]*",""); print; next} 1' file | column -t
samplename1 anotherthing asdfgh
2 3 4
5 6 7

Multiply every nth field...elegantly

I have a text file with a series of numbers:
1 2 4 2 2 6 3 4 7 4 4 8 2 4 6 5 5 8
I need to have every third field multiplied by 3, so output would be:
1 2 12 2 2 18 3 4 21 4 4 24 2 4 18 5 5 24
Now, I've hammered out a solution already, but I know there's a quicker, more elegant one out there. Here's what I've gotten to work:
xargs -n1 < input.txt | awk '{printf NR%3 ? "%d " : $0*3" ", $1}' > output.txt
I feel that there must be an awk one-liner that can do this?? How can I make awk look at each field (instead of each record), thus not needing the call to xargs to put every field on a different line? Or maybe sed can do it?
Try:
awk '{for (i=3;i<=NF;i+=3)$i*=3; print}' input.txt > output.txt
I have not tested this yet (posted on my iPod). The print command without parameters should print out the whole (partially modified) line. You might have to set OFS=" " in the BEGIN section to get the blank as the separator in the output.
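With that adjustment the one-liner would read as follows (a sketch; since the default OFS is already a single space, the plain version above normally prints space-separated output as-is):
awk 'BEGIN{OFS=" "} {for (i=3; i<=NF; i+=3) $i*=3; print}' input.txt > output.txt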
this line would work too:
awk -v RS="\\n| " -v ORS=" " '!(NR%3){$0*=3}7' file

Print all but the first three columns

Too cumbersome:
awk '{print " "$4" "$5" "$6" "$7" "$8" "$9" "$10" "$11" "$12" "$13}' things
awk '{for(i=1;i<4;i++) $i="";print}' file
use cut
$ cut -f4-13 file
or if you insist on awk and $13 is the last field
$ awk '{$1=$2=$3="";print}' file
else
$ awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' file
A solution that does not add extra leading or trailing whitespace:
awk '{ for(i=4; i<NF; i++) printf "%s",$i OFS; if(NF) printf "%s",$NF; printf ORS}'
### Example ###
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<NF;i++)printf"%s",$i OFS;if(NF)printf"%s",$NF;printf ORS}' |
tr ' ' '-'
4-5-6-7
Sudo_O proposes an elegant improvement using the ternary operator (i==NF?ORS:OFS):
$ echo '1 2 3 4 5 6 7' |
awk '{ for(i=4; i<=NF; i++) printf "%s",$i (i==NF?ORS:OFS) }' |
tr ' ' '-'
4-5-6-7
EdMorton gives a solution preserving original whitespaces between fields:
$ echo '1 2 3 4   5    6 7' |
awk '{ sub(/([^ ]+ +){3}/,"") }1' |
tr ' ' '-'
4---5----6-7
BinaryZebra also provides two awesome solutions:
(these solutions even preserve trailing spaces from original string)
$ echo -e ' 1 2\t \t3 4   5   6 7 \t 8\t ' |
awk -v n=3 '{ for ( i=1; i<=n; i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 ' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
$ echo -e ' 1 2\t \t3 4   5   6 7 \t 8\t ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
The solution given by larsr in the comments is almost correct:
$ echo '1 2 3 4 5 6 7' |
awk '{for (i=3;i<=NF;i++) $(i-2)=$i; NF=NF-2; print $0}' | tr ' ' '-'
3-4-5-6-7
This is the fixed and parametrized version of larsr solution:
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=n;i<=NF;i++)$(i-(n-1))=$i;NF=NF-(n-1);print $0}' n=4 | tr ' ' '-'
4-5-6-7
All other answers before Sep-2013 are nice but add extra spaces:
Example of answer adding extra leading spaces:
$ echo '1 2 3 4 5 6 7' |
awk '{$1=$2=$3=""}1' |
tr ' ' '-'
---4-5-6-7
Example of answer adding extra trailing space
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' |
tr ' ' '-'
4-5-6-7-------
Try this:
awk '{ $1=""; $2=""; $3=""; print $0 }'
The correct way to do this is with an RE interval because it lets you simply state how many fields to skip, and retains inter-field spacing for the remaining fields.
e.g. to skip the first 3 fields without affecting the spacing between the remaining fields, given the format of input we seem to be discussing in this question, it is simply:
$ echo '1 2 3 4 5 6' |
awk '{sub(/([^ ]+ +){3}/,"")}1'
4 5 6
If you want to accommodate leading spaces and non-blank white space (e.g. tabs), but again with the default FS, then it's:
$ echo ' 1 2 3 4 5 6' |
awk '{sub(/[[:space:]]*([^[:space:]]+[[:space:]]+){3}/,"")}1'
4 5 6
If you have an FS that's an RE you can't negate in a character set, you can convert it to a single char first (RS is ideal if it's a single char since an RS CANNOT appear within a field; otherwise consider SUBSEP), then apply the RE interval substitution, then convert to the OFS. e.g. if chains of "."s separated the fields:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,RS);sub("([^"RS"]+["RS"]+){3}","");gsub(RS,OFS)}1'
4 5 6
Obviously if OFS is a single char AND it can't appear in the input fields you can reduce that to:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,OFS); sub("([^"OFS"]+["OFS"]+){3}","")}1'
4 5 6
Then you have the same issue as with all the loop-based solutions that reassign the fields - the FSs are converted to OFSs. If that's an issue, you need to look into GNU awk's patsplit() function.
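A sketch of that patsplit() approach (gawk only; the seps array captures the text that separated the matched fields, so the original runs of "."s survive):
$ echo '1...2.3.4...5....6' |
gawk '{
    n = patsplit($0, f, /[^.]+/, s)   # f[i] = field i, s[i] = separator after it
    out = ""
    for (i = 4; i <= n; i++) out = out f[i] (i < n ? s[i] : "")
    print out
}'
4...5....6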
Pretty much all the answers so far either leave leading spaces, leave trailing spaces, or introduce some other separator issue. To select from the fourth field onward, where the separator is whitespace and the output separator is a single space, using awk:
awk '{for(i=4;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' file
To parametrize the starting field you could do:
awk '{for(i=n;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' n=4 file
And also the ending field:
awk '{m=(m>NF?NF:m); for(i=n;i<=m;i++)printf "%s",$i (i==m?ORS:OFS)}' n=4 m=10 file
awk '{$1=$2=$3="";$0=$0;$1=$1}1'
Input
1 2 3 4 5 6 7
Output
4 5 6 7
echo 1 2 3 4 5| awk '{ for (i=3; i<=NF; i++) print $i }'
Another way to avoid using the print statement:
$ awk '{$1=$2=$3=""}sub("^"FS"*","")' file
In awk when a condition is true print is the default action.
I can't believe nobody offered plain shell:
while read -r a b c d; do echo "$d"; done < file
Options 1 to 3 have issues with multiple whitespace (but are simple).
That is the reason to develop options 4 and 5, which handle multiple white spaces with no problem.
Of course, if options 4 or 5 are used with n=0, both will preserve any leading whitespace, since n=0 means nothing is removed (a quick check of this appears after Option 4 below).
Option 1
A simple cut solution (works with single delimiters):
$ echo '1 2 3 4 5 6 7 8' | cut -d' ' -f4-
4 5 6 7 8
Option 2
Forcing an awk re-calculation sometimes solves the problem of the added leading spaces (works with some versions of awk):
$ echo '1 2 3 4 5 6 7 8' | awk '{ $1=$2=$3="";$0=$0;} NF=NF'
4 5 6 7 8
Option 3
Printing each field formatted with printf gives more control:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for (i=n+1; i<=NF; i++){printf("%s%s",$i,i==NF?RS:OFS);} }'
4 5 6 7 8
However, all the previous answers change every FS between fields to OFS. Let's build a couple of solutions that don't.
Option 4
A loop with sub to remove fields and delimiters is more portable, and doesn't trigger a change of FS to OFS:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for(i=1;i<=n;i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 '
4 5 6 7 8
NOTE: The "^["FS"]*" is to accept an input with leading spaces.
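As a quick check of the n=0 claim made earlier (with n=0 the loop body never runs, so the line passes through untouched, leading whitespace and all):
$ echo '   1 2 3 4' |
awk -v n=0 '{ for(i=1;i<=n;i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 '
   1 2 3 4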
Option 5
It is quite possible to build a solution that does not add extra leading or trailing whitespace, and that preserves existing whitespace, using the gensub function from GNU awk, like this:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }'
4 5 6 7 8
It may also be used to swap the two parts of the line around a field count n:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ a=gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1);
b=gensub("^(.*)("a")","\\1",1);
print "|"a"|","!"b"!";
}'
|4 5 6 7 8 | ! 1 2 3 !
Of course, in such case, the OFS is used to separate both parts of the line, and the trailing white space of the fields is still printed.
Note1: ["FS"]* is used to allow leading spaces in the input line.
Cut has a --complement flag that makes it easy (and fast) to delete columns. The resulting syntax is analogous to what you want to do, making the solution easier to read and understand. Complement also works for the case where you would like to delete non-contiguous columns.
$ foo='1 2 3 %s 5 6 7'
$ echo "$foo" | cut --complement -d' ' -f1-3
%s 5 6 7
$
Perl solution which does not add leading or trailing whitespace:
perl -lane 'splice @F,0,3; print join " ",@F' file
The perl @F autosplit array starts at index 0, while awk fields start with $1.
Perl solution for comma-delimited data:
perl -F, -lane 'splice @F,0,3; print join ",",@F' file
Python solution:
python -c "import sys;[sys.stdout.write(' '.join(line.split()[3:]) + '\n') for line in sys.stdin]" < file
For me, the most compact and compliant solution to the request is:
$ a='1 2\t \t3 4 5 6 7 \t 8\t ';
$ echo -e "$a" | awk -v n=3 '{while (i<n) {i++; sub($1 FS"*", "")}; print $0}'
And if you have more lines to process, as for instance in a file foo.txt, don't forget to reset i to 0:
$ awk -v n=3 '{i=0; while (i<n) {i++; sub($1 FS"*", "")}; print $0}' foo.txt
Thanks to your forum.
As I was annoyed by the first highly upvoted but wrong answer, I wrote a reply there; and since the wrong answers here are marked as such, here is my bit. I do not like the proposed solutions, as I can see no reason to make the answer so complex.
I have a log where, after $5 (an IP address), there can be more text or no text. I need everything from the IP address to the end of the line, should there be anything after $5. In my case this is actually within an awk program, not an awk one-liner, so awk must solve the problem. When I try to remove the first 4 fields using the old, nice-looking, and most upvoted but completely wrong answer:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{$1=$2=$3=$4=""; printf "[%s]\n", $0}'
it spits out a wrong and useless response (I added the [] to demonstrate):
[ 37.244.182.218 one two three]
Instead, if columns are fixed width until the cut point and awk is needed, the correct and quite simple answer is:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{printf "[%s]\n", substr($0,28)}'
which produces the desired output:
[37.244.182.218 one two three]
I've found this other possibility, maybe it could be useful also...
awk 'BEGIN {OFS=ORS="\t" }; {for(i=1; i<14; i++) print $i " "; print $NF "\n" }' your_file
Note: this is for tabular data, printing from column $1 to $14.
Use cut:
cut -d <delimiter character> -f <number of first column>-<number of last column> <file name>
e.g.: if you have file1 containing: car.is.nice.equal.bmw
Run: cut -d . -f1-3 file1 and it will print car.is.nice
This isn't very far from some of the previous answers, but does solve a couple of issues:
cols.sh:
#!/bin/bash
awk -v s=$1 '{for(i=s; i<=NF;i++) printf "%-5s", $i; print "" }'
Which you can now call with an argument that will be the starting column:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 3
3 4 5 6 7 8 9 10 11 12 13 14
Or:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7
7 8 9 10 11 12 13 14
This is 1-indexed; if you prefer zero indexed, use i=s + 1 instead.
Moreover, if you would like to have two arguments for the starting index and end index, change the file to:
#!/bin/bash
awk -v s=$1 -v e=$2 '{for(i=s; i<=e;i++) printf "%-5s", $i; print "" }'
For example:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7 9
7 8 9
The %-5s aligns the result as 5-character-wide columns; if this isn't enough, increase the number, or use %s (with a space) instead if you don't care about alignment.
An awk printf-based solution that avoids the % problem, and is unique in that it prints nothing (not even a newline) if there are fewer than 4 columns to print:
awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
Testing:
$ x='1 2 3 %s 4 5 6'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
%s 4 5 6
$ x='1 2 3'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$ x='1 2 3 '
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$