Product of two columns, added to the next row, and so forth - awk

I am attempting to compute the product of the columns in each row of a multi-row file and add it to the product from the subsequent row, and so forth.
So I would essentially start with
awk '{print $1 "\t" ($2 * $3)}' filename > temp
How would this be looped for each unique id in column 1? Sample data below.
SAMPLE DATA
name1 14 10
name1 48 10
name2 23 98
name3 90 28
name4 83 6
name4 5 3
name3 15 7

If I am reading it correctly, you need the product of the 2nd and 3rd columns in each row, summed per 1st-column value. If this is the case, then the following may help you.
awk '{a[$1]=(a[$1]?a[$1]+($2 * $3):$2*$3)} END{for(i in a){print i,a[i]}}' Input_file
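As a side note, the ternary is not strictly needed: awk treats an uninitialized array element as 0, so a plain += gives the same result. A minimal equivalent sketch:
awk '{a[$1]+=$2*$3} END{for(i in a){print i,a[i]}}' Input_file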
2nd solution: You could use sort and awk in case you need the output in sorted order.
sort -k1 Input_file |
awk '
prev!=$1 && prev{
print prev,total
total=prev=""
}
{
total+=($2*$3)
prev=$1
}
END{
if(prev && total){
print prev,total
}
}'
3rd solution: In case you need the output in the same order as the 1st field appears in Input_file, the following may help.
awk '
!a[$1]++{
b[++count]=$1
}
{
c[$1]=(c[$1]?c[$1] + ($2*$3):($2*$3))
}
END{
for(i=1;i<=count;i++){
print b[i],c[b[i]]
}
}' Input_file
Output will be as follows.
name1 620
name2 2254
name3 2625
name4 513
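If GNU Datamash is available, a sketch of an alternative (assuming whitespace-separated input) is to compute the products in awk and let datamash do the grouping and summing:
awk '{print $1, $2*$3}' Input_file | datamash -sW groupby 1 sum 2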

AWK command to add a column with the count of grouped columns

I have a data set tab separated like this: (file.txt)
A B
1 111
1 111
1 112
1 113
1 113
1 113
1 113
2 113
2 113
2 113
I want to add a new column C to show the count of rows grouped by A and B.
Desired output:
A B C
1 111 2
1 111 2
1 112 1
1 113 4
1 113 4
1 113 4
1 113 4
2 113 3
2 113 3
2 113 3
I have tried this:
awk 'BEGIN{ FS=OFS="\t" }
NR==FNR{
if (FNR>1) a[$2]+=$3
next
}
{ $(NF+1)=(FNR==1 ? "C" : a[$2]) }
1
' file.txt file.txt > file2.txt
Could you please try the following, written with your shown samples in mind.
awk '
FNR==NR{
count[$1,$2]++
next
}
FNR==1{
print $0,"C"
next
}
{
print $0,count[$1,$2]
}
' Input_file Input_file
Add BEGIN{FS=OFS="\t"} to the above code in case your data is tab-delimited.
Explanation: a detailed explanation of the above.
awk ' ##Starting awk program from here.
FNR==NR{ ##Checking condition FNR==NR, which will be TRUE while the first Input_file is being read.
count[$1,$2]++ ##Creating count with the 1st and 2nd fields as index and increasing its value.
next ##next will skip all further statements from here.
}
FNR==1{ ##If this is the 1st line, then do the following.
print $0,"C" ##Printing the current line with the C heading here.
next ##next will skip all further statements from here.
}
{
print $0,count[$1,$2] ##Printing the current line along with its count, indexed by the 1st and 2nd fields.
}
' Input_file Input_file ##Mentioning Input_file(s) here.
Problem in OP's attempt: OP was adding $3 to the values (though the logic looked ok), but there is NO 3rd field present in Input_file, which is why it was not working. Also, OP was using the 2nd field alone as the index, but as per OP's comments it should be the 1st and 2nd fields together.
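If reading the Input_file twice is undesirable (for example, when the data arrives on a pipe), here is a single-pass sketch that buffers the lines and prints everything in the END block; it assumes the first line is the header:
awk '
FNR==1{
print $0,"C"
next
}
{
line[FNR]=$0
key[FNR]=$1 SUBSEP $2
count[$1,$2]++
}
END{
for(i=2;i<=FNR;i++){
print line[i],count[key[i]]
}
}' Input_file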
You might consider using GNU Datamash (note that it prints one row per group rather than repeating the count on every input line), e.g.:
datamash -HW groupby 1,2 count 1 < file.txt | column -t
Output:
GroupBy(A) GroupBy(B) count(A)
1 111 2
1 112 1
1 113 4
2 113 3

Print the data from the second column [duplicate]

This question already has answers here: Pipe symbol | in AWK field delimiter
I am having a file called fixed.txt as shown below:
Column1 | Column2 | Column3
Total expected ratio | 53 | 68
Total number|count number | 54 | 72
reset|print|total | 64 | 84
I am trying to print column 2 as below:
Expected output:
53
54
64
I tried the below script but I am not getting the desired output.
#!/bin/bash
for d in fixed.txt
do
awk -F" | "
NR>1
awk '{ print $2 }' fixed.txt
done
Can we use the pipe (|) and space together as a delimiter?
1st solution: Could you please try the following; it is written and tested with your shown samples at https://ideone.com/WoF40j
awk '
BEGIN{
FS="|"
}
{
print $(NF-1)+0
}
' Input_file
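The +0 matters here: with FS="|" the second-to-last field is " 53 " including the surrounding spaces, and adding 0 forces a numeric conversion that strips them. A quick way to see the effect:
echo ' 53 ' | awk '{print $0+0}' ##prints 53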
2nd solution: To use space and | together as a field delimiter, one could run the following.
awk -F'[[:blank:]]+\\|[[:blank:]]+' '{print $(NF-1)}' Input_file
OR
awk -F' +\\| +' '{print $(NF-1)}' Input_file
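Note that the solutions above also process the header line (the 1st prints 0 for it, the 2nd prints Column2); if the header should be skipped, an FNR>1 guard can be added, e.g.:
awk -F' +\\| +' 'FNR>1{print $(NF-1)}' Input_file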

How to compare 2 files having multiple occurrences of a number and output the additional occurrence?

Currently I am using an awk script to compare 2 files having random numbers in non-sequential order.
It works perfectly, but there is just one further condition I would like to fulfill.
Current awk function
awk '
{
$0=$0+0
}
FNR==NR{
a[$0]
next
}
($0 in a){
b[$0]
next
}
{ print }
END{
for(j in a){
if(!(j in b)){ print j }
}
}
' compare1.txt compare2.txt
What does the function accomplish currently?
It outputs a list of all the numbers which are present in compare1 but not in compare2, and vice versa.
If any number has zeros as a prefix, the zeros are ignored while comparing (basically the absolute values of the numbers must differ to be treated as a mismatch). Example: 3 should be considered to match 003, 014 to match 14, 008 to match 8, etc.
As required, it also considers a number matched even if the two occurrences are not on the same line in both files.
Required additional condition
In its current form, this function works in such a way that if one file has multiple occurrences of a number and the other file has even one occurrence of that same number, it considers the number matched for all repetitions.
I need the awk function to be edited so that it outputs any additional occurrence of a number.
cat compare1.txt
57
11
13
3
889
014
91
775
cat compare2.txt
003
889
13
14
57
12
90
775
775
Expected output
12
90
11
91
**775**
The number marked at the end is currently not shown in the output by my present awk function (2 occurrences - 1 occurrence).
As mentioned at https://stackoverflow.com/a/62499047/1745001, this is the job that comm exists to do:
$ comm -3 <(awk '{print $0+0}' compare1.txt | sort) <(awk '{print $0+0}' compare2.txt | sort)
11
12
775
90
91
and to get rid of the white space:
$ comm -3 <(awk '{print $0+0}' compare1.txt | sort) <(awk '{print $0+0}' compare2.txt | sort) |
awk '{print $1}'
11
12
775
90
91
You just need to count the occurrences and account for them in the matching...
$ awk '{k=$0+0}
NR==FNR {a[k]++; next}
!(k in a && a[k]-->0);
END {for(k in a) while(a[k]-->0) print k}' file1 file2
12
90
775
11
91
Note that, as in your original script, the numbers are compared by numeric value ($0+0 takes care of the leading zeros); if you need a different comparison, such as a true absolute value, you can add it easily by just changing how k is set in the first line.
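For example, if negative numbers should also be matched against their positive counterparts (a hypothetical extension), only the computation of k changes:
awk '{k=$0+0; if(k<0) k=-k} ##normalize and drop the sign
NR==FNR {a[k]++; next}
!(k in a && a[k]-->0);
END {for(k in a) while(a[k]-->0) print k}' file1 file2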

Dividing a data file into new files based on the data in a particular column

I have a data file (data.txt) as shown below:
0 25 10 25000
1 25 7 18000
1 25 9 15000
0 20 9 1000
1 20 8 800
0 20 8 900
0 50 10 4000
0 50 5 2500
1 50 10 5000
I want to copy the rows with same value in the second column to separate files. I want to get following three files:
data.txt_25
0 25 10 25000
1 25 7 18000
1 25 9 15000
data.txt_20
0 20 9 1000
1 20 8 800
0 20 8 900
data.txt_50
0 50 10 4000
0 50 5 2500
1 50 10 5000
I have just started learning awk. I have tried the following bash script:
1 #!/bin/bash
2
3 for var in 20 25 50
4 do
5 awk -v var="$var" '$2==var { print $0 }' data.txt > data.txt_$var
6 done
While the bash script does what I want it to do, it is time-consuming, as I have to put the values of the second column in line 3 manually.
So I would like to do this using awk alone. How can I achieve this using awk?
Thanks in advance.
Could you please try the following; it considers that your 2nd column values are NOT in sorted order.
sort -k2 Input_file |
awk '
prev!=$2{
close(output_file)
output_file="data.txt_"$2
}
{
print > (output_file)
prev=$2
}'
In case your Input_file's 2nd column is already sorted, there is no need to use sort; you could directly run:
awk '
prev!=$2{
close(output_file)
output_file="data.txt_"$2
}
{
print > (output_file)
prev=$2
}' Input_file
Explanation: a detailed explanation of the above.
sort -k2 Input_file | ##Sorting Input_file with respect to the 2nd column, then passing the output to awk.
awk ' ##Starting awk program from here.
prev!=$2{ ##If the prev variable is NOT equal to $2, then do the following.
close(output_file) ##Closing output_file in the back-end to avoid "too many open files" errors.
output_file="data.txt_"$2 ##Setting variable output_file to data.txt_ followed by $2.
}
{
print > (output_file) ##Printing current line to output_file here.
prev=$2 ##Setting variable prev to $2 here.
}'
If the groups in data.txt are separated by empty lines, you can also use paragraph mode:
awk -v RS= '{f = "data.txt_" $2; print > f; close(f)}' data.txt
-v RS= paragraph mode, empty lines are used to separate input records
f = "data.txt_" $2 construct filename using second column value (by default awk split input record on spaces/tabs/newlines)
print > f write input record contents to filename
close(f) close the file
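If the groups are not separated by empty lines, a simple per-line sketch works as well; here print > f keeps each output file open until the program ends, which is fine for a small number of distinct keys:
awk '{print > ("data.txt_" $2)}' data.txt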

Compare two files and append the values, leave the mismatches as such in the output file

I'm trying to match two files, file1.txt (50,000 lines) and file2.txt (55,000 lines). I want to compare file2 to file1, extract the values of columns 2 and 3, and leave the mismatches as such. The output file must contain all the ids from file2, i.e., it should have 55,000 lines. Note: not all the ids in file1 are present in file2, i.e. the actual number of matches could be less than 50,000.
file1.txt
ab1 12 345
ab2 9 456
gh67 6 987
file2.txt
ab2 0 0
ab1 0 345
nh7 0 0
gh67 6 987
Output
ab2 9 456
ab1 12 345
nh7 0 0
gh67 6 987
This is what I tried, but it only prints the matches (so instead of 55,000 lines I have 49,000 lines in my output file):
awk 'NR==FNR {f[$1]=$0;next}$1 in f{print f[$1],$0}' file1.txt file2.txt >output.txt
This awk script will work:
NR == FNR { ##First file (file1.txt): remember the whole line, indexed by its id.
a[$1] = $0
next
}
$1 in a { ##Id present in file1.txt: print its values for columns 2 and 3.
split(a[$1], b)
print $1, (b[2] == $2 ? $2 : b[2]), (b[3] == $3 ? $3 : b[3])
}
!($1 in a) ##Id not present in file1.txt: print the file2.txt line unchanged.
If you save this as a.awk and run
awk -f a.awk file1.txt file2.txt
This will output
ab2 9 456
ab1 12 345
nh7 0 0
gh67 6 987
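Since every matched row is effectively replaced by the corresponding file1.txt row, a more compact sketch of the same idea is:
awk 'NR==FNR{a[$1]=$0; next} {print ($1 in a ? a[$1] : $0)}' file1.txt file2.txt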