Selecting records with N fields from a file - scripting

Can somebody help me with filtering my file? Inside the file I have rows with 3, 4, or 5 elements, and I want to print (e.g. with echo) only those rows which have 5 elements. I'm talking about shell scripts. Thanks in advance.

awk 'NF == 5 { print $0 }' "$file"
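A quick way to try it on a throwaway sample (the file name data.txt below is just an illustration, not from the question):

printf 'a b c\na b c d\na b c d e\n' > data.txt
awk 'NF == 5 { print $0 }' data.txt    # prints only: a b c d e

awk splits each line into whitespace-separated fields by default, so NF is simply the number of elements on the current line.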

Related

Replace value in particular columns in csv file

I would like to replace the values which are greater than 20 in columns 5 and 7 with AAA.
input file
9179,22.4,-0.1,22.4,2.6,0.1,2.6,39179
9179,98.1,-1.7,98.11,1.9,1.7,2.55,39179
9179,-48.8,0.5,48.8,-1.2,-0.5,1.3,39179
6121,25,0,25,50,0,50,36121
6123,50,0,50,50,0,50,36123
6125,75,0,75,50,0,50,36125
desired output
9179,22.4,-0.1,22.4,2.6,0.1,2.6,39179
9179,98.1,-1.7,98.11,1.9,1.7,2.55,39179
9179,-48.8,0.5,48.8,-1.2,-0.5,1.3,39179
6121,25,0,25,AAA,0,AAA,36121
6123,50,0,50,AAA,0,AAA,36123
6125,75,0,75,AAA,0,AAA,36125
I tried
With this command I replace the values in column 5; how do I do it for column 7 too?
awk -F ',' -v OFS=',' '$1 { if ($5>20) $5="AAA"; print}' file
Thanks in advance
Here is another take that makes the set of columns configurable:
$ awk -v cols="5,7" 'BEGIN {FS=OFS=","; split(cols,a)}
{for(i in a) if($a[i]>20) $a[i]="AAA"}1' file
9179,22.4,-0.1,22.4,2.6,0.1,2.6,39179
9179,98.1,-1.7,98.11,1.9,1.7,2.55,39179
9179,-48.8,0.5,48.8,-1.2,-0.5,1.3,39179
6121,25,0,25,AAA,0,AAA,36121
6123,50,0,50,AAA,0,AAA,36123
6125,75,0,75,AAA,0,AAA,36125
awk 'BEGIN{FS=OFS=","} $5>20{$5="AAA"} $7>20{$7="AAA"}1' file
9179,22.4,-0.1,22.4,2.6,0.1,2.6,39179
9179,98.1,-1.7,98.11,1.9,1.7,2.55,39179
9179,-48.8,0.5,48.8,-1.2,-0.5,1.3,39179
6121,25,0,25,AAA,0,AAA,36121
6123,50,0,50,AAA,0,AAA,36123
6125,75,0,75,AAA,0,AAA,36125
You can use two condition { action } blocks for multiple checks and actions.
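If the threshold should be configurable as well, the configurable-columns approach above can be extended with one more variable; a minimal sketch (the variable name thr is an assumption, not part of the original answers):

awk -v cols="5,7" -v thr=20 'BEGIN {FS=OFS=","; split(cols,a)}
{for(i in a) if($a[i]+0 > thr) $a[i]="AAA"}1' file

The +0 simply forces a numeric comparison on the selected fields.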

summarizing the contents of a text file to another one using awk

I have a big text file with 2 tab-separated fields. As you see in the small example, every 2 lines have a number in common. I want to summarize my text file in this way:
1- Look for the lines that have the number in common and sum up the second column of those lines.
small example:
ENST00000054666.6 2
ENST00000054666.6_2 15
ENST00000054668.5 4
ENST00000054668.5_2 10
ENST00000054950.3 0
ENST00000054950.3_2 4
expected output:
ENST00000054666.6 17
ENST00000054668.5 14
ENST00000054950.3 4
As you see, the difference is in both columns: in the 1st column there is only one entry per common number, without the "_2" suffix, and in the 2nd column the value is the sum of both lines (which have the common number in the input file).
I tried this code, but it does not return what I want:
awk -F '\t' '{ col2 = $2, $2=col2; print }' OFS='\t' input.txt > output.txt
Do you know how to fix it?
Solution 1st: the following awk may help you:
awk '{sub(/_.*/,"",$1)} {a[$1]+=$NF} END{for(i in a){print i,a[i]}}' Input_file
Solution 2nd: in case your Input_file is sorted by the 1st field, the following may help you:
awk '{sub(/_.*/,"",$1)} prev!=$1 && prev{print prev,val;val=""} {val+=$NF;prev=$1} END{if(val){print prev,val}}' Input_file
Append > output.txt to the above commands in case you need the output in an output file too.
If order is not a concern, the below may also help:
awk -v FS="\t|_" '{count[$1]+=$NF}
END{for(i in count){printf "%s\t%s%s",i,count[i],ORS;}}' file
ENST00000054668.5 14
ENST00000054950.3 4
ENST00000054666.6 17
Edit:
If the order of the output does matter, the below approach, using a flag that is reset after every pair of lines, helps:
$ awk -v FS="\t|_" '{count[$1]+=$NF;++i;
if(i==2){printf "%s\t%s%s",$1,count[$1],ORS;i=0}}' file
ENST00000054666.6 17
ENST00000054668.5 14
ENST00000054950.3 4
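If the lines are not guaranteed to arrive in pairs but the output should still follow the order in which each key first appears, here is a sketch of an order-preserving variant (not from the original answers; it keeps its own list of first-seen keys):

awk -F'\t' '{key=$1; sub(/_.*/,"",key)
             if (!(key in sum)) order[++n]=key
             sum[key]+=$2}
       END  {for (j=1; j<=n; j++) print order[j] "\t" sum[order[j]]}' Input_file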

awk: Search missing value in file

awk newbie here! I am asking for help to solve a simple specific task.
Here is file.txt
1
2
3
5
6
7
8
9
As you can see, a single number (the number 4) is missing. I would like to print the missing number 4 to the console. My idea was to compare the current line number with the entry and, whenever they don't match, print the line number and exit. I tried
cat file.txt | awk '{ if ($NR != $1) {print $NR; exit 1} }'
But it prints only a newline.
I am trying to learn awk via this small exercise, so I am mainly interested in solutions using awk. I would also welcome an explanation of why my code does not do what I expect.
Try this (in your attempt, $NR means "field number NR"; since each line has only one field, $NR is empty from the second line on, so the test fires immediately and print $NR prints an empty string, while the line number itself is plain NR):
awk '{ if (NR != $1) {print NR; exit 1} }' file.txt
4
Since you have a solution already, here is another approach, comparing with the previous value.
awk '$1!=p+1{print p+1} {p=$1}' file
Your positional comparison won't work if you have more than one missing value.
Maybe this will help:
seq $(tail -1 file)|diff - file|grep -Po '.*(?=d)'
4
Since I am learning awk as well:
awk 'BEGIN{i=0}{i++;if(i!=$1){print i;i=$1}}' file
4
`awk` explanation: `i` tracks the expected number and is incremented line by line with `i++`; whenever it does not match the number read from `$1`, the missing value `i` is printed and `i` is resynchronized to `$1`.
cat file
1
2
3
5
6
7
8
9
11
12
13
15
awk 'BEGIN{i=0}{i++;if(i!=$1){print i;i=$1}}' file
4
10
14
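If a gap can span several consecutive numbers and every missing value should be reported, a compact sketch along the same lines (assuming the same one-number-per-line input):

awk '{while (++i < $1) print i; i=$1}' file

Each line fast-forwards the expected counter i up to $1, printing every value it skips over on the way.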

awk - skip last line for condition

When I wrote an answer for this question I used the following:
something | sed '$d' | awk '$1>3{print $0}'
e.g.
print only lines where the 1st field is bigger than 3 (awk)
but omit the last line (the sed '$d').
This seems to me a bit of duplicated work; surely it is possible to do the above with awk alone, without the sed?
I'm an awkdiot - so, can someone suggest a solution?
Here's one way you could do it:
$ printf "%s\n" {1..10} | awk 'NR>1&&p>3{print p}{p=$1}'
4
5
6
7
8
9
Basically, print the first field of the previous line, rather than the current one.
As Wintermute has rightly pointed out in the comments (thanks), in order to print the whole line, you can modify the code to this:
awk 'p { print p; p="" } $1 > 3 { p = $0 }'
This only assigns the contents of the line to p if the first field is greater than 3.
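As a quick sanity check against the same generated stream as above (just reusing the printf example; not part of the original answer):

printf "%s\n" {1..10} | awk 'p { print p; p="" } $1 > 3 { p = $0 }'

This prints 4 through 9: line 10 is buffered in p but never printed, because no further line arrives to flush it.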

Extracting block of data from a file

I have a problem which surely can be solved with an awk one-liner.
I want to split an existing data file, which consists of blocks of data, into separate files.
The data file has the following form:
1 1
1 2
1 3

2 1
2 2
2 3

3 1
3 2
3 3
And I want to store every single block of data in a separate file, named, for example, "1.dat", "2.dat", "3.dat", ...
The problem is that the blocks don't have a fixed number of lines; they are just delimited by two "new lines" (i.e. a blank line).
Thanks in advance,
Jürgen
This should get you started:
awk '{ print > (++i ".dat") }' RS= file.txt
If by two "new lines" you mean, two newline characters:
awk '{ print > ++i ".dat" }' RS="\n\n" file.txt
See how the results differ? Setting a null RS (i.e. the first example) puts awk in paragraph mode, where any run of blank lines acts as a single record separator and leading blank lines are ignored; that is probably what you're looking for.
Another approach:
awk 'NF != 0 {print > ($1 ".dat")}' file.txt
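If there are many blocks, it can also be worth closing each output file as you go, since some awk implementations limit the number of simultaneously open files. A sketch combining this with the paragraph-mode idea from the first answer:

awk 'BEGIN { RS="" }                            # paragraph mode: blank-line-separated blocks
     { f = (++i ".dat"); print > f; close(f) }' file.txt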