Compare and print last column in a file - awk

I have a file
(n34)); 1
Z(n2)); 1
(n52)); 2
(n35)); 3
(n67)); 3
(n19)); 4
(n68)); 4
(n20)); 5
(n36)); 5
(n53)); 5
(n69)); 5
N(n3)); 5
(n54)); 6
(n70)); 7
N(n4)); 7
I want output such that whenever lines share the same number after the semicolon, they are printed on a single line, joined with ; as the field separator.
Output should be
(n34)); 1;Z(n2)); 1
(n52)); 2
(n35)); 3;(n67)); 3
(n19)); 4;(n68)); 4
(n20)); 5;(n36)); 5;(n53)); 5;(n69)); 5;N(n3)); 5
(n54)); 6
(n70)); 7;N(n4)); 7
I tried the code below
awk -F';' 'NR == FNR { count[$2]++;next}
With this I cannot work out how to print matching lines on the same line when the same numbers are present.

1st solution: Could you please try the following, written and tested with the shown samples in GNU awk, assuming your Input_file is sorted by the 2nd column.
awk '
BEGIN{ OFS=";" }
prev!=$2{
if(val){ print val }
val=""
}
{
val=(val?val OFS:"")$0
prev=$2
}
END{
if(val){ print val }
}
' Input_file
2nd solution: OR, in case your 2nd field is not sorted, try the following.
sort -nk2 Input_file |
awk '
BEGIN{ OFS=";" }
prev!=$2{
if(val){ print val }
val=""
}
{
val=(val?val OFS:"")$0
prev=$2
}
END{
if(val){ print val }
}
'
Explanation of awk code:
awk ' ##Starting awk program from here.
BEGIN{ OFS=";" } ##Setting output field separator as semi colon here.
prev!=$2{ ##Checking condition if the previous 2nd field is NOT equal to the current 2nd field; if so, do the following.
if(val){ print val } ##If val is set then print value of val here.
val="" ##Nullifying val here.
}
{
val=(val?val OFS:"")$0 ##Creating val variable and keep adding values to it with OFS in between their values.
prev=$2 ##Setting current 2nd field to prev to be checked in next line.
}
END{ ##Starting END block for this program from here.
if(val){ print val } ##If val is set then print value of val here.
}
' Input_file ##Mentioning Input_file name here.

Another awk:
$ awk -F\; '{a[$2]=a[$2] (a[$2]==""?"":";") $0}END{for(i in a)print a[i]}' file
Output:
(n34)); 1;Z(n2)); 1
(n52)); 2
(n35)); 3;(n67)); 3
(n19)); 4;(n68)); 4
(n20)); 5;(n36)); 5;(n53)); 5;(n69)); 5;N(n3)); 5
(n54)); 6
(n70)); 7;N(n4)); 7
Explained:
$ awk -F\; '{ # set delimiter (probably useless)
a[$2]=a[$2] (a[$2]==""?"":";") $0 # keep appending where $2s match
}
END { # in the end
for(i in a) # output
print a[i]
}' file
Edit: for(i in a) produces output in an order that appears random. If you need it ordered, you can pipe the output to:
$ awk '...' | sort -t\; -k2n
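Alternatively, in GNU awk specifically you can request sorted traversal of the array and skip the external sort. This is a sketch assuming gawk is available (PROCINFO["sorted_in"] is a gawk-only feature); the sample file name is illustrative:

```shell
# gawk-only: PROCINFO["sorted_in"] controls for-in traversal order
printf '%s\n' '(n35)); 3' '(n34)); 1' 'Z(n2)); 1' > sample.txt

gawk -F';' '
{ a[$2] = a[$2] (a[$2] == "" ? "" : ";") $0 }
END {
    PROCINFO["sorted_in"] = "@ind_num_asc"  # iterate keys in ascending numeric order
    for (i in a) print a[i]
}' sample.txt
```

For this sample it prints the group for 1 first, then the group for 3, with no separate sort step.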

Perl to the rescue!
perl -ne '($x, $y) = split;
$h{$y} .= "$x $y;";
END { print $h{$_} =~ s/;$/\n/r for sort keys %h }
' -- file
It splits each line on whitespace, stores the value in a hash table %h keyed by the second column, and when the file has been read, it prints the remembered values, sorting them by the second column. We always store the semicolon at the end, so we need to replace the final one with a new line in the output.

I would harness GNU AWK array for that task following way. Let file.txt content be:
(n34)); 1
Z(n2)); 1
(n52)); 2
(n35)); 3
(n67)); 3
(n19)); 4
(n68)); 4
(n20)); 5
(n36)); 5
(n53)); 5
(n69)); 5
N(n3)); 5
(n54)); 6
(n70)); 7
N(n4)); 7
then:
awk '{data[$2]=data[$2] ";" $0}END{for(i in data){print substr(data[i],2)}}' file.txt
output is:
(n34)); 1;Z(n2)); 1
(n52)); 2
(n35)); 3;(n67)); 3
(n19)); 4;(n68)); 4
(n20)); 5;(n36)); 5;(n53)); 5;(n69)); 5;N(n3)); 5
(n54)); 6
(n70)); 7;N(n4)); 7
Explanation: I exploit the fact that GNU AWK arrays are created lazily: referencing data[$2] before the key exists behaves like an empty string. For every line I concatenate the whole line onto what is stored under the $2 key in array data, using ;. This leads to a ; appearing at the beginning of every record in data, so I print starting at the 2nd character. Note that the for(i in data) traversal order is not actually guaranteed (even in gawk it is unspecified unless PROCINFO["sorted_in"] is set), so the output may need to be sorted afterwards. Keep in mind this solution stores everything in data, so it might not work well for huge files.
(tested in gawk 4.2.1)

datamash has a similar function built in:
<infile datamash -W groupby 2 collapse 1
Output:
1 (n34));,Z(n2));
2 (n52));
3 (n35));,(n67));
4 (n19));,(n68));
5 (n20));,(n36));,(n53));,(n69));,N(n3));
6 (n54));
7 (n70));,N(n4));
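If you want the original question's format back (lines joined by ;), the collapsed column can be expanded again with a small awk step. A sketch, assuming datamash is installed and emits its default tab output separator; the sample file infile is created inline for illustration:

```shell
printf '%s\n' '(n34)); 1' 'Z(n2)); 1' '(n52)); 2' > infile

<infile datamash -W groupby 2 collapse 1 |
awk -F'\t' '{
    n = split($2, a, ",")                  # the collapsed first-column values
    s = ""
    for (i = 1; i <= n; i++)
        s = s (s ? ";" : "") a[i] " " $1   # re-attach the group key to each value
    print s
}'
```

For the three sample lines this reproduces the grouped format, e.g. (n34)); 1;Z(n2)); 1 on one line.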

This might work for you (GNU sed):
sed -E ':a;N;s/( \S+)\n(.*\1)$/\1;\2/;ta;P;D' file
Append the following line to the current line.
If both lines end in the same number (string), delete the intervening newline and repeat.
Print/Delete the first line in the pattern space and repeat.

Related

Count rows and columns for multiple CSV files and make new file

I have multiple large comma separated CSV files in a directory. But, as a toy example:
one.csv has 3 rows, 2 columns
two.csv has 4 rows 5 columns
This is what the files look like -
# one.csv
a b
1 1 3
2 2 2
3 3 1
# two.csv
c d e f g
1 4 1 1 4 1
2 3 2 2 3 2
3 2 3 3 2 3
4 1 4 4 1 4
The goal is to make a new .txt or .csv that gives the rows and columns for each:
one 3 2
two 4 5
To get the rows and columns (and dump it into a file) for a single file
$ awk -F "," '{print NF}' *.csv | sort | uniq -c > dims.txt
But I'm not understanding the syntax to get counts for multiple files.
What I've tried
$ awk '{for (i=1; i<=2; i++) -F "," '{print NF}' *.csv$i | sort | uniq -c}'
With any awk, you could try following awk program.
awk '
FNR==1{
if(cols && rows){
print file,rows,cols
}
rows=cols=file=""
file=FILENAME
sub(/\..*/,"",file)
cols=NF
next
}
{
rows=(FNR-1)
}
END{
if(cols && rows){
print file,rows,cols
}
}
' one.csv two.csv
Explanation: Adding detailed explanation for above solution.
awk ' ##Starting awk program from here.
FNR==1{ ##Checking if this is the first line of each file; if so, do the following.
if(cols && rows){ ##Checking if cols AND rows are NOT NULL then do following.
print file,rows,cols ##Printing file, rows and cols variables here.
}
rows=cols=file="" ##Nullifying rows, cols and file here.
file=FILENAME ##Setting FILENAME value to file here.
sub(/\..*/,"",file) ##Removing everything from dot to till end of value in file.
cols=NF ##Setting NF values to cols here.
next ##next will skip all further statements from here.
}
{
rows=(FNR-1) ##Setting FNR-1 value to rows here.
}
END{ ##Starting END block of this program from here.
if(cols && rows){ ##Checking if cols AND rows are NOT NULL then do following.
print file,rows,cols ##Printing file, rows and cols variables here.
}
}
' one.csv two.csv ##Mentioning Input_file names here.
Using gnu awk you can do this in a single awk:
awk -F, 'ENDFILE {
print gensub(/\.[^.]+$/, "", "1", FILENAME), FNR-1, NF-1
}' one.csv two.csv > dims.txt
cat dims.txt
one 3 2
two 4 5
You will need to iterate over all CSVs, printing the name and the dimensions for each file:
for i in *.csv; do awk -F "," 'END{print FILENAME, NR, NF}' "$i"; done > dims.txt
If you want to avoid awk, you can also do it with wc -l for lines and grep -o "CSV-separator" | wc -l for fields.
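Spelled out, that awk-free route might look like this for one file (a sketch assuming a plain comma-separated file with a header row and no quoted commas; the sample data is made up):

```shell
printf 'a,b\n1,3\n2,2\n3,1\n' > one.csv

fn=one.csv
rows=$(( $(wc -l < "$fn") - 1 ))                          # -1: skip the header row
cols=$(( $(head -n 1 "$fn" | grep -o ',' | wc -l) + 1 ))  # commas + 1 = fields
printf '%s %s %s\n' "${fn%.csv}" "$rows" "$cols"          # -> one 3 2
```

Counting separators in the header line sidesteps awk entirely, at the price of breaking on CSVs with quoted fields containing commas.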
I would harness GNU AWK's ENDFILE for this task as follows, let content of one.csv be
1,3
2,2
3,1
and two.csv be
4,1,1,4,1
3,2,2,3,2
2,3,3,2,3
1,4,4,1,4
then
awk 'BEGIN{FS=","}ENDFILE{print FILENAME, FNR, NF}' one.csv two.csv
output
one.csv 3 2
two.csv 4 5
Explanation: ENDFILE is executed after processing each file. I set FS to , assuming fields are ,-separated and there is no , inside a field. FILENAME, FNR, NF are built-in AWK variables: FNR is the number of the current row in the file, i.e. in ENDFILE the number of the last row; NF is the number of fields (again, of the last row). If your files have headers, use FNR-1; if your rows are prepended with a row number, use NF-1.
edit: changed NR to FNR
Without GNU awk you can use the shell plus POSIX awk this way:
for fn in *.csv; do
cols=$(awk '{print NF; exit}' "$fn")
rows=$(awk 'END{print NR-1}' "$fn")
printf "%s %s %s\n" "${fn%.csv}" "$rows" "$cols"
done
Prints:
one 3 2
two 4 5

selecting columns in awk discarding corresponding header

How to properly select columns in awk after some processing. My file here:
cat foo
A;B;C
9;6;7
8;5;4
1;2;3
I want to add a first column with line numbers and then extract some columns of the result. For the example let's get the new first (line numbers) and third columns. This way:
awk -F';' 'FNR==1{print "linenumber;"$0;next} {print FNR-1,$1,$3}' foo
gives me this unexpected output:
linenumber;A;B;C
1 9 7
2 8 4
3 1 3
but expected is (note B is now the third column as we added linenumber as first):
linenumber;B
1;6
2;5
3;2
To get your expected output, use:
$ awk 'BEGIN {
FS=OFS=";"
}
{
print (FNR==1?"linenumber":FNR-1),$(FNR==1?3:1)
}' file
Output:
linenumber;C
1;9
2;8
3;1
To add a column with line number and extract first and last columns, use:
$ awk 'BEGIN {
FS=OFS=";"
}
{
print (FNR==1?"linenumber":FNR-1),$1,$NF
}' file
Output this time:
linenumber;A;C
1;9;7
2;8;4
3;1;3
Why do you print $0 (the complete record) in your header? And, if you want only two columns in your output, why do you print three fields (FNR-1, $1 and $3)? Finally, the reason your output field separators are spaces instead of the expected ; is simply that you did not specify the output field separator (OFS). You can do this with a command-line variable assignment (OFS=\;), as shown in the second and third versions below, with the -v option (-v OFS=\;), or in a BEGIN block (BEGIN {OFS=";"}), as you wish (there are differences between these 3 methods but they don't matter here).
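The three ways of setting OFS, side by side on a toy file (all three produce identical output; the sample input is illustrative):

```shell
printf 'A;B;C\n9;6;7\n' > foo

awk -F\; '{print $1,$2}' OFS=\; foo          # trailing variable assignment
awk -F\; -v OFS=\; '{print $1,$2}' foo       # -v option
awk -F\; 'BEGIN{OFS=";"} {print $1,$2}' foo  # BEGIN block
```

Each command prints A;B followed by 9;6.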
[EDIT]: see a generic solution at the end.
If the field you want to keep is the second of the input file (the B column), try:
$ awk -F\; 'FNR==1 {print "linenumber;" $2; next} {print FNR-1 ";" $2}' foo
linenumber;B
1;6
2;5
3;2
or
$ awk -F\; 'FNR==1 {print "linenumber",$2; next} {print FNR-1,$2}' OFS=\; foo
linenumber;B
1;6
2;5
3;2
Note that, as long as you don't want to keep the first field of the input file ($1), you could as well overwrite it with the line number:
$ awk -F\; '{$1=FNR==1?"linenumber":FNR-1; print $1,$2}' OFS=\; foo
linenumber;B
1;6
2;5
3;2
Finally, here is a more generic solution to which you can pass the list of indexes of the columns of the input file you want to print (1 and 3 in this example):
$ awk -F\; -v cols='1;3' '
BEGIN { OFS = ";"; n = split(cols, c); }
{ printf("%s", FNR == 1 ? "linenumber" : FNR - 1);
for(i = 1; i <= n; i++) printf("%s", OFS $(c[i]));
printf("\n");
}' foo
linenumber;A;C
1;9;7
2;8;4
3;1;3
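And for the output the question originally asked for (line number plus the B column), the same generic script is simply invoked with cols='2' (the sample file is recreated inline here):

```shell
printf 'A;B;C\n9;6;7\n8;5;4\n1;2;3\n' > foo

awk -F\; -v cols='2' '
BEGIN { OFS = ";"; n = split(cols, c); }
{ printf("%s", FNR == 1 ? "linenumber" : FNR - 1);
  for (i = 1; i <= n; i++) printf("%s", OFS $(c[i]));
  printf("\n");
}' foo
```

This prints linenumber;B followed by 1;6, 2;5 and 3;2.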

How do I sum the first n rows of another column in bash

For example given
1 4
2 5
3 6
I want to sum up the numbers in the second column and create a new column with it. The new column is 4, 9 (4+5), and 15 (4+5+6)
1 4 4
2 5 9
3 6 15
Could you please try the following if you are OK with awk.
awk 'FNR==1{print $0,$2;prev=$2;next} {print $0,$2+prev;prev+=$2}' Input_file
OR
awk 'FNR==1{print $0,$2;prev=$2;next} {prev+=$2;print $0,prev}' Input_file
Explanation: Adding explanation for above code now.
awk ' ##Starting awk program here.
FNR==1{ ##Checking condition if line is first line then do following.
print $0,$2 ##Printing current line with 2nd field here.
prev=$2 ##Creating variable prev whose value is 2nd field of current line.
next ##next will skip all further statements from here.
} ##Closing block for FNR condition here.
{ ##Starting new block here.
prev+=$2 ##Adding $2 value to prev variable value here.
print $0,prev ##Printing current line and prev variable here.
}' Input_file ##mentioning Input_file name here.
PS: Welcome to SO. You should mention the efforts you have put in to solve your problem, as we are all here to learn.
this is more idiomatic
$ awk '{print $0, s+=$2}' file
1 4 4
2 5 9
3 6 15
print the current line and the value s, which is incremented by the second field; in other words, a rolling sum.
this can be golfed into the following if all values are positive (so no chance of summing to 0), but perhaps too cryptic.
$ awk '$3=s+=$2' file
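To see why the all-positive caveat matters: when the running sum reaches 0, the assignment used as a pattern evaluates to false and that line is silently dropped. Appending an explicit 1 pattern restores it (toy input with a negative value, made up for illustration):

```shell
printf '1 4\n2 -4\n3 6\n' > nums

awk '$3=s+=$2' nums      # the line where the sum is 0 disappears
awk '{$3=s+=$2} 1' nums  # all three lines print, including "2 -4 0"
```

The second form separates the side effect (the assignment) from the always-true pattern, so printing no longer depends on the sum's value.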
Another awk..
$ cat john_ward.txt
1 4
2 5
3 6
$ awk ' {$(NF+1)=s+=$NF}1 ' john_ward.txt
1 4 4
2 5 9
3 6 15
$

Select current and previous line if values are the same in 2 columns

Check the values in columns 2 and 3; if the values are the same in the previous and current line (for example, lines 2-3 and 6-7), then print both lines with fields separated by ,.
Input file
1 1 2 35 1
2 3 4 50 1
2 3 4 75 1
4 7 7 85 1
5 8 6 100 1
8 6 9 125 1
4 6 9 200 1
5 3 2 156 2
Desired output
2,3,4,50,1,2,3,4,75,1
8,6,9,125,1,4,6,9,200,1
I tried to modify this code, but got no results:
awk '{$6=$2 $3 - $p2 $p3} $6==0{print p0; print} {p0=$0;p2=p2;p3=$3}'
Thanks in advance.
$ awk -v OFS=',' '{$1=$1; cK=$2 FS $3} pK==cK{print p0, $0} {pK=cK; p0=$0}' file
2,3,4,50,1,2,3,4,75,1
8,6,9,125,1,4,6,9,200,1
With your own code and its mechanism updated:
awk '(($2=$2) $3) - (p2 p3)==0{printf "%s", p0; print} {p0=$0;p2=$2;p3=$3}' OFS="," file
2,3,4,50,12,3,4,75,1
8,6,9,125,14,6,9,200,1
But it has an underlying problem, so better use this simplified/improved way:
awk '($2=$2) FS $3==cp{print p0,$0} {p0=$0; cp=$2 FS $3}' OFS=, file
The FS is needed, check the comments under Mr. Morton's answer.
Why your code fails:
Concatenation (what the space does) has higher precedence than the minus -.
You used $6 to save the value you want to compare, but it then becomes part of $0, the line (as the last column). You can change it to a temporary variable name instead.
You have a typo (p2=p2), and you used $p2 and $p3, which mean: take p2's value and fetch the corresponding column. So if p2==3 then $p2 equals $3.
You didn't set OFS, so even if your code worked, the output would be separated by spaces.
print adds a trailing newline \n, so even if the above problems didn't exist, you would get 4 lines instead of the 2 lines of output you wanted.
Could you please try following too.
awk 'prev_2nd==$2 && prev_3rd==$3{$1=$1;print prev_line,$0} {prev_2nd=$2;prev_3rd=$3;$1=$1;prev_line=$0}' OFS=, Input_file
Explanation: Adding explanation for above code now.
awk '
prev_2nd==$2 && prev_3rd==$3{ ##Checking whether the previous line's prev_2nd and prev_3rd match the current line's 2nd and 3rd fields; if yes, do the following.
$1=$1 ##Reassigning $1 to itself forces awk to rebuild $0 with OFS (comma), which OP needs as the output field separator.
print prev_line,$0 ##Printing the previous line and the current line here.
} ##Closing this condition block here.
{
prev_2nd=$2 ##Saving the current line's $2 into prev_2nd.
prev_3rd=$3 ##Saving the current line's $3 into prev_3rd.
$1=$1 ##Reassigning $1 to itself so $0 is rebuilt with the comma separator.
prev_line=$0 ##Saving the current (comma-rebuilt) line into prev_line.
}
' OFS=, Input_file ##Setting OFS(output field separator) value as comma here and mentioning Input_file name here.

awk to copy and move of file last line to previous line above

In the awk below I am trying to move only the last line to just above the line before it. The problem is that since my input file varies (it is not always 4 lines like below), I cannot use i=3 every time, and I cannot seem to fix it. Thank you :).
file
this is line 1
this is line 2
this is line 3
this is line 4
desired output
this is line 1
this is line 2
this is line 4
this is line 3
awk (seems like the last line is being moved, but to i=2)
awk '
{lines[NR]=$0}
END{
print lines[1], lines[NR];
for (i=3; i<NR; i++) {print lines[i]}
}
' OFS=$'\n' file
this is line 1
this is line 2
this is line 4
this is line 3
$ seq 4 | awk 'NR>2{print p2} {p2=p1; p1=$0} END{print p1 ORS p2}'
1
2
4
3
$ seq 7 | awk 'NR>2{print p2} {p2=p1; p1=$0} END{print p1 ORS p2}'
1
2
3
4
5
7
6
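The same sliding two-variable window, spelled out with comments (a sketch; p1 and p2 simply hold the last two lines seen so far):

```shell
seq 5 | awk '
NR > 2 { print p2 }         # print lines with a two-line delay
{ p2 = p1; p1 = $0 }        # slide the window: p2 = second-newest, p1 = newest
END { print p1 ORS p2 }     # flush the window with the last two lines swapped
'
```

For seq 5 this prints 1 2 3 5 4 (one number per line): the delay buffers two lines, and END emits them in reversed order.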
try following awk once:
awk '{a[FNR]=$0} END{for(i=1;i<=FNR-2;i++){print a[i]};print a[FNR] ORS a[FNR-1]}' Input_file
Explanation: Creating an array named a indexed by FNR (the current line number), storing the current line as its value. In the END section, a for loop runs from i=1 to i<=FNR-2 (FNR-2 because we only need to swap the last 2 lines), printing those lines; then it simply prints a[FNR] (the last line) followed by a[FNR-1], joined with ORS (to print a newline between them).
2nd solution: by counting the number of lines in the Input_file and putting the count into an awk variable.
awk -v lines=$(wc -l < Input_file) 'FNR==(lines-1){val=$0;next} FNR==lines{print $0 ORS val;next} 1' Input_file
You nearly had it. You just have to change the order.
awk '
{lines[NR]=$0}
END{
for (i=1; i<NR-1; i++) {print lines[i]}
print lines[NR];
print lines[NR-1];
}
' OFS=$'\n' file
I'd reverse the file, swap the first two lines, then re-reverse the file
tac file | awk 'NR==1 {getline line2; print line2} 1' | tac
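For reference, that pipeline reverses the file, swaps the (now) first two lines inside awk, and reverses back. A quick check on a numbered sample (tac is a GNU coreutils tool; the getline reads the second reversed line into a variable without disturbing $0):

```shell
seq 4 > f
tac f | awk 'NR==1 {getline line2; print line2} 1' | tac
```

This prints 1 2 4 3, one number per line, i.e. the last two lines swapped.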