print whole variable contents if the number of lines is greater than N - awk

How do I print all lines if a certain condition matches?
Example:
echo "$ip"
this is a sample line
another line
one more
last one
If this variable has more than 3 lines then print the whole variable.
I tried:
echo $ip| awk 'NR==4'
last one
echo $ip|awk 'NR>3{print}'
last one
echo $ip|awk 'NR==12{} {print}'
this is a sample line
another line
one more
last one
echo $ip| awk 'END{x=NR} x>4{print}'
Need to achieve this:
If this file has more than 3 lines then print the whole file. I can do this using wc and bash, but I need a one-liner.

The right way to do this (no echo, no pipe, no loops, etc.):
$ awk -v ip="$ip" 'BEGIN{if (gsub(RS,"&",ip)>2) print ip}'
this is a sample line
another line
one more
last one
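The one-liner works because gsub() returns the number of substitutions it performed; replacing every RS (newline) with itself ("&") counts the line separators without changing ip. A minimal sketch of just that counting step (sample text invented):

```shell
# gsub() returns the substitution count; replacing each newline (RS)
# with itself ("&") counts the separators without altering the text.
ip=$'one\ntwo\nthree\nfour'
awk -v ip="$ip" 'BEGIN{ print gsub(RS, "&", ip) }'
# prints 3 -- four lines have three separators, so >2 means more than 3 lines
```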

You can use awk as follows:
echo "$ip" | awk '{a[$0]; next}END{ if (NR>3) { for(i in a) print i }}'
one more
another line
this is a sample line
last one
you can also make the value 3 configurable from an awk variable,
echo "$ip" | awk -v count=3 '{a[$0]; next}END{ if (NR>count) { for(i in a) print i }}'
The idea is to store each line as a key in the array via {a[$0]; next} as it is processed; by the time the END clause is reached, the NR variable holds the line count of the string/file. The lines are printed if the condition matches, i.e. the number of lines is greater than 3 or whatever value you configure. Note that for (i in a) does not guarantee the original line order, which is why the output above is shuffled.
And always remember to double-quote variables in bash to avoid word-splitting by the shell.
Using James Brown's useful comment below to preserve the order of lines, do
echo "$ip" | awk -v count=3 '{a[NR]=$0; next}END{if(NR>count)for(i=1;i<=NR;i++)print a[i]}'
this is a sample line
another line
one more
last one
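As a quick sanity check (toy input invented), the order-preserving version prints nothing when the input has three or fewer lines:

```shell
# With only 3 lines, NR>count is false in END, so nothing is printed.
printf 'a\nb\nc\n' |
awk -v count=3 '{a[NR]=$0} END{if (NR>count) for(i=1;i<=NR;i++) print a[i]}'
```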

Another one in awk. First, the test files:
$ cat 3
1
2
3
$ cat 4
1
2
3
4
Code:
$ awk 'NR<4{b=b (NR==1?"":ORS)$0;next} b{print b;b=""}1' 3 # look ma, no lines
(no output)
$ awk 'NR<4{b=b (NR==1?"":ORS)$0;next} b{print b;b=""}1' 4
1
2
3
4
Explained:
NR<4 {                    # for the first 3 records
    b=b (NR==1?"":ORS) $0 # buffer them to b with ORS delimiter
    next                  # proceed to next record
}
b {                       # if buffer has records, ie. NR>=4
    print b               # output buffer
    b=""                  # and reset it
}
1                         # print all records after that

Related

Filter logs with awk for last 100 lines

I can filter the last 500 lines using tail or grep
tail -n 500 my_log | grep "ERROR"
What is the equivalent command using awk? And how can I add the number of lines to the command below?
awk '/ERROR/' my_log
awk does not know where the end of the file is until it has finished reading it, but you can read the file twice: the first pass finds the end, the second processes the lines that are in scope. You could also keep the last X lines in a buffer, but that is a bit heavier in memory consumption and processing. Notice that the file needs to be mentioned twice at the end of the command for this.
awk 'FNR==NR{LL=NR-500;next};FNR>=LL && /ERROR/{ print FNR":"$0}' my_log my_log
With explanation:
awk '# first reading
FNR==NR{
    # last line is this minus 500
    LL=NR-500
    # go to next line (for this file)
    next
}
# at the second read (the section above handled the first)
# if the line number is at or after LL AND ERROR is in the line content, print it
FNR >= LL && /ERROR/ { print FNR ":" $0 }
' my_log my_log
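To sanity-check the two-pass idea on toy data (file name and pattern invented here, and the window shrunk from 500 to 3 lines):

```shell
# First pass computes LL = total - 3; second pass prints matching lines
# whose number is at or after LL. Here /1/ plays the role of /ERROR/.
seq 1 10 > toy.log
awk 'FNR==NR{LL=NR-3; next} FNR>=LL && /1/ {print FNR":"$0}' toy.log toy.log
# prints 10:10
rm -f toy.log
```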
With sed there is no direct "last N lines" address (GNU sed does not support $-500,$), so combine it with tail:
tail -n 500 my_log | sed -n '/ERROR/p'
As you had no sample data to test with, I'll show with just numbers using seq 1 10. This one stores last n records and prints them out in the end:
$ seq 1 10 |
awk -v n=3 '{a[++c]=$0;delete a[c-n]}END{for(i=c-n+1;i<=c;i++)print a[i]}'
8
9
10
If you want to filter the data, add for example /ERROR/ before {a[++c]=$0; ...}.
Explained:
awk -v n=3 '{                 # set wanted amount of records
    a[++c]=$0                 # hash to a
    delete a[c-n]             # delete the ones outside of the window
}
END {                         # in the end
    for(i=c-n+1;i<=c;i++)     # in order
        print a[i]            # output records
}'
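Following the filtering hint above, here is a sketch of that variant (sample log lines invented): only matching lines enter the window, so you get the last n ERROR lines. It assumes at least n matches; with fewer, the END loop would print blank slots.

```shell
# Keep a sliding window of the last n=2 lines matching /ERROR/.
printf '%s\n' 'ERROR one' 'INFO skip' 'ERROR two' 'ERROR three' |
awk -v n=2 '/ERROR/{a[++c]=$0; delete a[c-n]}
            END{for(i=c-n+1;i<=c;i++) print a[i]}'
# prints:
# ERROR two
# ERROR three
```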
Could you please try the following.
tac Input_file | awk 'FNR<=100 && /ERROR/' | tac
In case you want to print line numbers in the awk command, try the following.
awk '/ERROR/{print FNR,$0}' Input_file

awk to store field length in variable then use in print

In the awk below I am trying to store the length of $5 in a variable il if the condition is met (in the two lines shown, it is) and then add that variable to $3 in the print statement. The two sub statements remove the matching part from both $5 and $6. The script as-is executes and produces the current output; however, il does not seem to be populated and added in the print. It seems close, but I'm not sure why the variable isn't being stored. Thank you :)
awk 'BEGIN{FS=OFS="\t"} # define fs and output
FNR==NR{ # process each field in each line of file
if(length($5) < length($6)) { # condition
il=$(length($5))
echo $il
sub($5,"",$6) && sub($6,"",$5) # removing matching
print $1,$2,$3+$il,$3+$il,"-",$6 # print desired output
next
}
}' in
in tab-delimited
id1 1 116268178 GAAA GAAAA
id2 2 228197304 A AATCC
current output tab-delimited
id1 1 116268178 116268178 - A
id2 2 228197304 228197304 - ATCC
desired output tab-delimited
since `$5` is 4 in line 1 that is added to `$3`
since `$5` is 1 in line 2 that is added to `$3`
id1 1 116268181 116268181 - A
id2 2 228197305 228197305 - ATCC
The following awk may help you here.
awk '{$3+=length($4);$3=$3 OFS $3;sub($4,"",$5);$4="-"} 1' Input_file
Please add BEGIN{FS=OFS="\t"} in case your Input_file is TAB delimited and you require output in TAB delimited form too.

How to compare two strings of a file match the strings of another file using AWK?

I have 2 huge files and I need to count how many entries of file 1 exist in file 2.
File 1 contains two ids, source and destination, like below:
11111111111111|22222222222222
33333333333333|44444444444444
55555555555555|66666666666666
11111111111111|44444444444444
77777777777777|22222222222222
44444444444444|00000000000000
12121212121212|77777777777777
01010101010101|01230123012301
77777777777777|97697697697697
66666666666666|12121212121212
File 2 contains the valid id list, which will be used to filter file 1:
11111111111111
22222222222222
44444444444444
77777777777777
00000000000000
88888888888888
66666666666666
99999999999999
12121212121212
01010101010101
What I am struggling to achieve is a way to count how many entries in file 1 have both of their ids present in file 2. Only when both numbers on the same line exist in file 2 will the line be counted.
From file 1:
11111111111111|22222222222222 — This will be counted because both entries exist on file 2, as well as 77777777777777|22222222222222 because both entries exist on file 2.
33333333333333|44444444444444 — This will not be counted because 33333333333333 does not exist on file 2 and the same goes to 55555555555555|66666666666666, the first does not exist on file 2.
So for the examples above it should count 6, and just printing that count is enough; that is better than editing either file.
awk -F'|' 'FNR == NR { seen[$0] = 1; next }
seen[$1] && seen[$2] { ++count }
END { print count }' file2 file1
Explanation:
1) FNR == NR (the record number within the current file equals the overall record number) is only true for the first input file, which is file2 (the order is important!). Thus for every line of file2, we record the number in seen.
2) For other lines (which is file1, given second on the command line) if the |-separated fields (-F'|') number 1 and 2 were both seen (in file2), we increment count by one.
3) In the END output the count.
Caveat: Every unique number in file2 is loaded into memory. But this also makes it fast instead of having to read through file2 over and over again.
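To see it end to end, here is the same one-liner run against the sample data from the question (written to temporary files purely for the demonstration):

```shell
# Recreate the question's sample data in temporary files.
f1=$(mktemp); f2=$(mktemp)
cat > "$f1" <<'EOF'
11111111111111|22222222222222
33333333333333|44444444444444
55555555555555|66666666666666
11111111111111|44444444444444
77777777777777|22222222222222
44444444444444|00000000000000
12121212121212|77777777777777
01010101010101|01230123012301
77777777777777|97697697697697
66666666666666|12121212121212
EOF
cat > "$f2" <<'EOF'
11111111111111
22222222222222
44444444444444
77777777777777
00000000000000
88888888888888
66666666666666
99999999999999
12121212121212
01010101010101
EOF
# file2 first (builds seen[]), file1 second (checked against it).
awk -F'|' 'FNR==NR { seen[$0]=1; next }
           seen[$1] && seen[$2] { ++count }
           END { print count }' "$f2" "$f1"
# prints 6
rm -f "$f1" "$f2"
```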
I don't know how to do it in awk, but if you are open to a quick-and-dirty bash script that someone can help make efficient, you could try this:
searcher.sh
-------------
#!/bin/bash
file1="$1"
file2="$2"
# split each line of file1 by pipe
while IFS='|' read -ra line; do
    # find 1st item in file2; if found, find 2nd item in file2
    # (-x -F: match the whole line as a fixed string, not a substring)
    if grep -qxF "${line[0]}" "$file2"; then
        if grep -qxF "${line[1]}" "$file2"; then
            # print line since both items were found in file2
            echo "${line[0]}|${line[1]}"
        fi
    fi
done < "$file1"
Usage
------
bash searcher.sh file1 file2
Result using your example
--------------------------
$ time bash searcher.sh file1 file2
11111111111111|22222222222222
11111111111111|44444444444444
77777777777777|22222222222222
44444444444444|00000000000000
12121212121212|77777777777777
66666666666666|12121212121212
real 0m1.453s
user 0m0.423s
sys 0m0.627s
That's really slow on my old PC.

awk - skip last line for condition

When I wrote an answer for this question I used the following:
something | sed '$d' | awk '$1>3{print $0}'
e.g.
print only lines where the 1st field is bigger than 3 (awk)
but omit the last line sed '$d'.
This seems to me like duplicated work; surely it is possible to do the above with awk alone, without the sed?
I'm an awkdiot - so, can someone suggest a solution?
Here's one way you could do it:
$ printf "%s\n" {1..10} | awk 'NR>1&&p>3{print p}{p=$1}'
4
5
6
7
8
9
Basically, print the first field of the previous line, rather than the current one.
As Wintermute has rightly pointed out in the comments (thanks), in order to print the whole line, you can modify the code to this:
awk 'p { print p; p="" } $1 > 3 { p = $0 }'
This only assigns the contents of the line to p if the first field is greater than 3.
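A quick illustration (numbers invented): with input 1 through 5, both 4 and 5 pass $1 > 3, but 5 is the last line, so it is still held in p when the input ends and is never printed:

```shell
# 4 and 5 both satisfy $1 > 3; 5 stays buffered in p and is dropped.
printf '%s\n' 1 2 3 4 5 | awk 'p { print p; p="" } $1 > 3 { p = $0 }'
# prints 4
```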

Word Count using AWK

I have file like below :
this is a sample file
this file will be used for testing
this is a sample file
this file will be used for testing
I want to count the words using AWK.
the expected output is
this 2
is 1
a 1
sample 1
file 2
will 1
be 1
used 1
for 1
Below is the AWK I have written, but I am getting some errors:
cat anyfile.txt|awk -F" "'{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}'
It works fine for me:
awk '{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}' testfile
used 1
this 2
be 1
a 1
for 1
testing 1
file 2
will 1
sample 1
is 1
PS: you do not need to set -F" ", since the default separator is any blank. (Incidentally, your error comes from the missing space between -F" " and the program: the shell joins them into one word, so awk sees a long field-separator argument and no program.)
PS2: do not use cat with programs that can read files themselves, like awk.
You can append sort to the command to sort the output:
awk '{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}' testfile | sort -k 2 -n
a 1
be 1
for 1
is 1
sample 1
testing 1
used 1
will 1
file 2
this 2
Instead of looping over each line and saving each word in an array ({for(i=1;i<=NF;i++) a[$i]++}), you can use gawk's support for a multi-character RS (Record Separator) and save each field in the array as follows (it is a little bit faster):
gawk '{a[$0]++} END{for (k in a) print k,a[k]}' RS='[[:space:]]+' file
Output:
used 1
this 2
be 1
a 1
for 1
testing 1
file 2
will 1
sample 1
is 1
In the above gawk command I define the whitespace character class [[:space:]]+ (one or more spaces or newline characters) as the record separator. Note that a multi-character/regex RS is a gawk extension, not part of POSIX awk.
Here is Perl code which provides similar sorted output to Jotne's awk solution:
perl -ne 'for (split /\s+/, $_){ $w{$_}++ }; END{ for $key (sort keys %w) { print "$key $w{$key}\n"}}' testfile
$_ is the current line, which is split based on whitespace /\s+/
Each word is then put into $_
The %w hash stores the number of occurrences of each word
After the entire file is processed, the END{} block is run
The keys of the %w hash are sorted alphabetically
Each word $key and number of occurrences $w{$key} is printed