Split a file after a certain number of unique entries - awk

Given a tab-delimited file:
A 12380
A 123801
A 1209
A 2035
A 4930
A 2903
B 2085
B 203801
B 240083
B 12308
B 12399
C 120303
C 1238058
C 235
D 55674
D 99683
D 2391095
D 12958
D 23804
D 5769
E 479903
E 28075
E 2310
E 6784
F 4789
F 23458
F 8976
G 9007
H 1203
H 12909
I want to split this file after a certain number of unique entries have been seen in a specific column. As an example, splitting the above file after every 3 unique entries in the first column would produce 3 files:
A 12380
A 123801
A 1209
A 2035
A 4930
A 2903
B 2085
B 203801
B 240083
B 12308
B 12399
C 120303
C 1238058
C 235

D 55674
D 99683
D 2391095
D 12958
D 23804
D 5769
E 479903
E 28075
E 2310
E 6784
F 4789
F 23458
F 8976

G 9007
H 1203
H 12909
I have this so far:
awk -F"\t" 'BEGIN { count=0; filename=1 }; x[$1]++==0 {count++}; count==3 { count=1; filename++}; {print >> filename".txt"; close(filename".txt");}' file
However, when running this in the terminal, I get the error:
awk: syntax error at source line 1
context is
BEGIN { count=0; filename=1 }; x[$1]++==0 {count++}; count==4 { count=1; filename++}; {print >> >>> filename".txt" <<<
awk: illegal statement at source line 1
Why?
EDIT: Removing the ".txt" fixes this; however, it is super slow. Any help?
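As to the "why": in POSIX awk, the target of > or >> in a print statement must be a plain string or a parenthesized expression, so the concatenation filename".txt" has to be written as (filename ".txt"). Some awks (e.g. gawk) happen to accept the unparenthesized form, which is why the same one-liner can work in one environment and fail in another. The slowness of the ".txt"-less version comes from calling close() after every single line, which forces a file reopen per record. A minimal sketch fixing both points (hand-checked against the shown sample only; seen and fname are arbitrary names):
awk -F'\t' -v fname=1 '
!seen[$1]++ && ++count > 3 {   # 4th, 7th, ... unique key: start a new file
count = 1
close(fname ".txt")            # close the finished file once, not once per line
fname++
}
{ print > (fname ".txt") }     # parentheses make the concatenated target legal
' file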

Could you please try the following (tested with the given samples).
awk -v count=1 '
prev!=$1 && prev{
count++
delete a[prev]
}
count==4 && !a[$1]++{
count=1
print ""
}
{
prev=$1
}
1
' Input_file
Explanation:
awk -v count=1 ' ##Starting awk program here, mentioning variable count whose value is 1.
prev!=$1 && prev{ ##Checking condition where prev is NOT equal to current $1 and variable prev is NOT NULL; if so, do the following.
count++ ##Incrementing variable count by 1 here.
delete a[prev] ##Deleting array a element whose index is the prev variable here.
}
count==4 && !a[$1]++{ ##Checking condition if count==4 and array a has no previous occurrence of $1; if so, do the following.
count=1 ##Setting value of count back to 1 here.
print "" ##Printing a blank line here to separate the groups.
}
{
prev=$1 ##Setting variable prev to $1 of the current line.
}
1 ##awk shorthand to print the current line.
' Input_file ##Mentioning Input_file name here.
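With the shown samples, this first variant prints all the records and inserts a blank line just before the first D line and again before the first G line, i.e. a separator after every 3 unique keys; the EDIT below writes actual numbered files instead.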
EDIT: To write the output into numbered files instead, try the following.
awk -v count=1 -v file_count=1 '
BEGIN{
file=file_count".txt"
}
prev!=$1 && prev{
count++
delete a[prev]
}
count==4 && !a[$1]++{
count=1
close(file)
file_count++
file=file_count".txt"
}
{
prev=$1
}
{
print $0 > (file)
}
' Input_file
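Run on the shown sample, this should create 1.txt, 2.txt and 3.txt containing the A-C, D-F and G-H blocks respectively, matching the files shown in the next answer.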

$ awk '$1!=(p""){p=$1;u++}
u>3{close(n++".txt");u=1}
{print >(n".txt")}' n=1 file
$ cat 1.txt
A 12380
A 123801
A 1209
A 2035
A 4930
A 2903
B 2085
B 203801
B 240083
B 12308
B 12399
C 120303
C 1238058
C 235
$ cat 2.txt
D 55674
D 99683
D 2391095
D 12958
D 23804
D 5769
E 479903
E 28075
E 2310
E 6784
F 4789
F 23458
F 8976
$ cat 3.txt
G 9007
H 1203
H 12909
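Two details worth noting in this answer: appending the empty string in $1!=(p"") forces a string comparison (without it, an unset p could compare numerically equal to a first field of 0), and n=1 placed before file in the argument list is an awk command-line variable assignment, processed left to right, so n starts at 1 without needing a BEGIN block.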

Related

Counting the number of unique values based on more than two columns in bash

I need to modify the code below (from Counting the number of unique values based on two columns in bash) to work on more than one column.
awk ' ##Starting awk program from here.
BEGIN{
FS=OFS="\t"
}
!found[$0]++{ ##Checking condition if the whole line is NOT present in found array; if so, do following.
val[$1]++ ##Creating val with 1st column as index and keep increasing its value here.
}
END{ ##Starting END block of this program from here.
for(i in val){ ##Traversing through array val here.
print i,val[i] ##Printing i and value of val with index i here.
}
}
' Input_file ##Mentioning Input_file name here.
Table to count how many of each double (all DIS)
patient sex DISa DISb DISc DISd DISe DISf DISg DISh DISi
patient1 male 550.1 550.5 594.1 594.3 594.8 591 1019 960.1 550.1
patient2 female 041 208 250.2 276.14 426.32 550.1 550.5 558 041
patient3 female NA NA NA NA NA NA NA 041 NA
The output I need is:
550.1 3
550.5 2
594.1 1
594.3 1
594.8 1
591 1
1019 1
960.1 1
041 3
208 1
250.2 1
276.14 1
426.32 1
558 1
Consider this awk:
awk -v OFS='\t' 'NR > 1 {for (i=3; i<=NF; ++i) if ($i+0 == $i) ++fq[$i]} END {for (i in fq) print i, fq[i]}' file
276.14 1
960.1 1
594.3 1
426.32 1
208 1
041 3
594.8 1
550.1 3
591 1
1019 1
558 1
550.5 2
250.2 1
594.1 1
A more readable form:
awk -v OFS='\t' '
NR > 1 {
for (i=3; i<=NF; ++i)
if ($i+0 == $i)
++fq[$i]
}
END {
for (i in fq)
print i, fq[i]
}' file
$i+0 == $i is a check for making sure the column value is numeric.
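To see what that test accepts, a quick throwaway check (not part of the answer):
$ echo '041 NA 55x 276.14' | awk '{for (i=1; i<=NF; i++) print $i, ($i+0 == $i ? "numeric" : "skipped")}'
041 numeric
NA skipped
55x skipped
276.14 numeric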
If the ordering must be preserved, then you need an additional array b[] to keep the order in which each number is encountered, e.g.
awk '
BEGIN { OFS = "\t" }
FNR > 1 {
for (i=3;i<=NF;i++)
if ($i~/^[0-9]/) {
if (!($i in a))
b[++n] = $i;
a[$i]++
}
}
END {
for (i=1;i<=n;i++)
print b[i], a[b[i]]
}' file
Example Use/Output
$ awk '
> BEGIN { OFS = "\t" }
> FNR > 1 {
> for (i=3;i<=NF;i++)
> if ($i~/^[0-9]/) {
> if (!($i in a))
> b[++n] = $i;
> a[$i]++
> }
> }
> END {
> for (i=1;i<=n;i++)
> print b[i], a[b[i]]
> }' patients
550.1 3
550.5 2
594.1 1
594.3 1
594.8 1
591 1
1019 1
960.1 1
041 3
208 1
250.2 1
276.14 1
426.32 1
558 1
Let me know if you have further questions.
Taking the complete solutions from the 2 answers above (@anubhava and @David) with all respect, and just adding a little tweak (a check for an integer value, as per the OP's shown samples), here are 2 solutions. Written and tested with the shown samples only.
1st solution: if order doesn't matter in the output, try:
awk -v OFS='\t' '
NR > 1 {
for (i=3; i<=NF; ++i)
if (int($i))
++fq[$i]
}
END {
for (i in fq)
print i, fq[i]
}' Input_file
2nd solution: if order matters, then based on David's answer, try:
awk '
BEGIN { OFS = "\t" }
FNR > 1 {
for (i=3;i<=NF;i++)
if (int($i)) {
if (!($i in a))
b[++n] = $i;
a[$i]++
}
}
END {
for (i=1;i<=n;i++)
print b[i], a[b[i]]
}' Input_file
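One caveat with the int($i) check in both solutions: int() returns 0 for a value of 0, for fractions below 1 such as 0.5, and for non-numeric strings alike, so a legitimate measurement of 0 or 0.5 would be skipped; it only works here because the shown samples contain no such values.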
Using GNU awk for multi-char RS:
$ awk -v RS='[[:space:]]+' '$0+0 == $0' file | sort | uniq -c
3 041
1 1019
1 208
1 250.2
1 276.14
1 426.32
3 550.1
2 550.5
1 558
1 591
1 594.1
1 594.3
1 594.8
1 960.1
If the order of fields really matters just pipe the above to awk '{print $2, $1}'.
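For example, applying that swap:
$ awk -v RS='[[:space:]]+' '$0+0 == $0' file | sort | uniq -c | awk '{print $2, $1}'
041 3
1019 1
208 1
250.2 1
276.14 1
426.32 1
550.1 3
550.5 2
558 1
591 1
594.1 1
594.3 1
594.8 1
960.1 1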

Concatenate multiple files and create new file based on the value

I have more than 50 files like this:
dell.txt
Name Id Year Value
xx.01 45 1990 2k
SS.01 89 2000 6.0k
Hp.txt
Name Id Year Value
xx.01 48 1994 21k
SS.01 80 2001 2k
Apple.txt
Name Id Year Value
xx.02 45 1990 20k
SS.01 89 2000 60k
kp.03 23 1996 530k
I just need to make a new file like this:
Name Id Year dell Hp Apple
xx.01 45 1990 2k 0 0
xx.01 48 1994 0 21k 0
xx.02 45 1990 0 0 20k
SS.01 80 2001 0 2k 0
SS.01 89 2000 6.0k 0 60k
kp.03 23 1996 0 0 530k
I tried paste for concatenation but it adds them in a different order. Is there another way using awk? I used the following code:
$ awk ' FNR==1{ if (!($0 in h)){file=h[$0]=i++} else{file=h[$0];next} } {print >> (file)} ' *.txt
Could you please try the following, written and tested with GNU awk; it prints the output in sorted format.
awk '
FNR==1{
tfile=FILENAME
sub(/\..*/,"",tfile)
file=(file?file OFS:"")tfile
header=($1 FS $2 FS $3)
next
}
{
a[$1 FS $2 FS $3 "#" FILENAME]=$NF
}
END{
print header,file
for(i in a){
split(i,arr,"#")
key=arr[1]
if(seen[key]++){ continue }
printf("%s",key)
for(j=1;j<=ARGIND;j++){
printf(" %s",((key "#" ARGV[j]) in a)?a[key "#" ARGV[j]]:0)
}
print ""
}
}
' dell.txt Hp.txt Apple.txt | sort -k1 | column -t
Explanation: adding a detailed explanation of the above.
awk ' ##Starting awk program from here.
FNR==1{ ##Checking if this is the 1st line of the current Input_file.
tfile=FILENAME ##Copying the current Input_file name into tfile.
sub(/\..*/,"",tfile) ##Stripping the extension from tfile.
file=(file?file OFS:"")tfile ##Creating file which has all Input_file names in it.
header=($1 FS $2 FS $3) ##Header has 3 fields in it from 1st line.
next ##next will skip all further statements from here.
}
{
a[$1 FS $2 FS $3 "#" FILENAME]=$NF ##Creating a with index of 1st, 2nd, 3rd fields "#" Input_file name, with the last field as value.
}
END{ ##Starting END block of this awk program from here.
print header,file ##Printing header and file variables here.
for(i in a){ ##Traversing through a here.
split(i,arr,"#") ##Splitting i into arr with delimiter "#".
key=arr[1] ##arr[1] holds the 3-field row key.
if(seen[key]++){ continue } ##Processing each row key only once.
printf("%s",key) ##Printing the row key here.
for(j=1;j<=ARGIND;j++){ ##Running a for loop over all Input_file arguments.
printf(" %s",((key "#" ARGV[j]) in a)?a[key "#" ARGV[j]]:0) ##Printing that file's value for this key, or 0 when absent.
}
print "" ##Finishing the output line here.
}
}
' dell.txt Hp.txt Apple.txt | sort -k1 | column -t ##Mentioning Input_file names, sorting the output and using column -t to align it.
Output will be as follows.
Name Id Year dell Hp Apple
SS.01 80 2001 0 2k 0
SS.01 89 2000 6.0k 0 60k
kp.03 23 1996 0 0 530k
xx.01 45 1990 2k 0 0
xx.01 48 1994 0 21k 0
xx.02 45 1990 0 0 20k
Here is an awk script to join the files as required.
BEGIN { OFS = "\t"}
NR==1 { col[++c] = $1 OFS $2 OFS $3 }
FNR==1 {
split(FILENAME, arr, ".")
f = arr[1]
col[++c] = f
next
}
{
id[$1 OFS $2 OFS $3] = $4
cell[$1 OFS $2 OFS $3 OFS f] = $4
}
END {
for (i=1; i<=length(col); i++) {
printf col[i] OFS
}
printf ORS
for (i in id) {
printf i OFS
for (c=2; c<=length(col); c++) {
printf (cell[i OFS col[c]] ? cell[i OFS col[c]] : "0") OFS
}
printf ORS
}
}
Usage:
awk -f tst.awk *.txt | sort -nk3
Note that the glob fetches the files in alphabetical order and the arguments order determines the column order of the output. If you want a different column order, you have to order the arguments, for example like below.
Output is a real tab-columned file, but if you want a table-like look with spaces, pipe to column -t.
Testing
Using your sample files and providing their order:
> awk -f tst.awk dell.txt Hp.txt Apple.txt | sort -nk3 | column -t
Name Id Year dell Hp Apple
xx.01 45 1990 2k 0 0
xx.02 45 1990 0 0 20k
xx.01 48 1994 0 21k 0
kp.03 23 1996 0 0 530k
SS.01 89 2000 6.0k 0 60k
SS.01 80 2001 0 2k 0

separate fields based on first column content, match in second column and subtract in fourth column values in awk

My input file is like:
a10 otu1 xx 44
b24 otu2 xxx 52
x35 otu3 xy 11
x45 otu3 zz 22
z452 Otu5 rr 78
control1 otu1 w 4
control2 otu2 ee 30
control3 otu3 tt 20
control4 otu4 yy 10
First, I want to separate control from the others in column 1, and then match the second column values of control with the others' second column. Where a match is found in the second column, I want to subtract the corresponding values in the fourth column.
Output file would be:
a10 otu1 xx 40
b24 otu2 xxx 22
x35 otu3 xy -9
x45 otu4 zz 12
z452 Otu5 rr 78
Now, to match the second column and subtract the values in the fourth column, I use:
awk 'NR==FNR {a[$2]=$2 in a?a[$2]-$4:$4; next} !b[$2]++ {print $1,$2,$3,a[$2]}' inputfile.txt{,}
How can I feed the separate field information (control vs others) into the script?
Could you please try the following.
awk '
!/^control/{
a[++count1]=$NF
b[count1]=$1 OFS $2 OFS $3
next
}
{
c[++count2]=$NF
}
END{
for(i=1;i<=count1;i++){
print b[i],a[i]-c[i]
}
}
' Input_file
More generic solution: in case you don't want to hardcode the field values in the first array a and you have more than 4 fields in the file, try the following.
awk '
!/^control/{
a[++count1]=$NF
$NF=""
sub(/ +$/,"")
b[count1]=$0
next
}
{
c[++count2]=$NF
}
END{
for(i=1;i<=count1;i++){
print b[i],a[i]-c[i]
}
}
' Input_file
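With the shown samples, both variants pair rows by position (1st non-control row with 1st control row, and so on) and should print:
a10 otu1 xx 40
b24 otu2 xxx 22
x35 otu3 xy -9
x45 otu3 zz 12
z452 Otu5 rr 78
which reproduces the OP's expected numbers. Note the pairing is by order of appearance, not by the otu key; the tst.awk answer below matches on the key instead and therefore prints 2 for the x45 row.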
$ cat tst.awk
NR==FNR {
if ( /^control/ ) {
control[$2] = $NF
}
next
}
!/^control/ {
$NF = $NF - control[$2]
print
}
$ awk -f tst.awk file file
a10 otu1 xx 40
b24 otu2 xxx 22
x35 otu3 xy -9
x45 otu3 zz 2
z452 Otu5 rr 78
Here's another take on this:
/^control/ {
a[$2]=a[$2]-$4
next
}
{
a[$2]=a[$2]+$4
b[$2]=$1 OFS $2 OFS $3
}
END {
for(i in b) print b[i] OFS a[i]
}
This subtracts any values on control lines, adds any values on other lines, storing them in the array a[]. It maintains an array of line content, b[].
By storing content in the array, it's possible for there to be multiple data or control lines affecting the value, and they can appear in any order in your input (since 44 - 4 is the same as -4 + 44).
Note that because our END for loop steps through the array, output is not guaranteed to be in the same order as input.
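For the shown input this prints, in some unspecified order:
a10 otu1 xx 40
b24 otu2 xxx 22
x45 otu3 zz 13
z452 Otu5 rr 78
Both otu3 data lines collapse into one row here (11 + 22 - 20 = 13) and only the last otu3 line's first three fields survive in b[], which differs from the positional answers above.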

replacing associative array indexes with their value using awk or sed

I would like to replace column values of ref using the key-value pairs from id.
cat id:
[1] a 8-23
[2] g 8-21
[3] d 8-13
cat ref:
a 1 2
b 3 4
c 5 3
d 1 2
e 3 1
f 1 2
g 2 3
desired output
8-23 1 2
b 3 4
c 5 3
8-13 1 2
e 3 1
f 1 2
8-21 2 3
I assume it would be best done using awk.
cat replace.awk
BEGIN { OFS="t" }
NR==FNR {
a[$2]=$3; next
}
$1 in !{!a[#]} {
print $0
}
I'm not sure what I need to change.
$1 in !{!a[#]} is not awk syntax. You just need $1 in a:
BEGIN { OFS = "\t" }
NR==FNR {
a[$2] = $3
next
}
{
$1 = ($1 in a) ? a[$1] : $1
print
}
To force OFS to update, this version always assigns to $1; a bare print uses $0 when no argument is given.
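Assuming the corrected script is saved as replace.awk, run it with the id file first (so the NR==FNR block reads the key-value pairs); since every line has $1 reassigned, the output is tab-separated:
$ awk -f replace.awk id ref
8-23 1 2
b 3 4
c 5 3
8-13 1 2
e 3 1
f 1 2
8-21 2 3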

Merge two rows with condition AWK

I have a question. I would like to merge two or three rows that match a condition into one row with specific printing.
INPUT: the file has 6 columns and is tab-delimited:
LOL h/h 2 a b c
LOLA h/h 3 b b b
SERP w/w 4 c c c
DARD s/s 5 d d d
GIT w/w 6 a b c
GIT h/h 6 a a b
GIT d/d 6 a b b
LOL h/h 7 a a a
Output: there are 2 conditions: if the $1s are the same and the $3s are the same, merge the rows together with specific printing:
LOL h/h 2 a b c
LOLA h/h 3 b b b
SERP w/w 4 c c c
DARD s/s 5 d d d
GIT w/w 6 a b c h/h 6 a a b d/d 6 a b b
LOL h/h 7 a a a
I have this code:
awk -F'\t' -v OFS="\t" 'NF>1{a[$1] = a[$1]"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6};END{for(i in a){print i""a[i]}}'
But it is merging by the 1st column only, and I am not sure if it is good to use this code.
In awk:
$ awk '($1 FS $3) in a{k=$1 FS $3; $1=""; a[k]=a[k] $0;next} {a[$1 FS $3]=$0} END {for(i in a) print a[i]}' file
SERP w/w 4 c c c
LOL h/h 2 a b c
LOLA h/h 3 b b b
DARD s/s 5 d d d
LOL h/h 7 a a a
GIT w/w 6 a b c h/h 6 a a b d/d 6 a b b
Explained:
($1 FS $3) in a { # if keys already seen in array a
k=$1 FS $3
$1="" # remove $1
a[k]=a[k] $0 # append to existing
next
}
{ a[$1 FS $3]=$0 } # if keys not seen, see them
END {
for(i in a) # for all stored keys
print a[i] # print
}
Here is an answer for gawk v4, which supports true multi-dimensional arrays. The columns from the first file are stored in a multi-dimensional array, so they are easy to compare against the second file's columns. My solution shows an example printf which you can modify as per your needs.
#!/bin/gawk -f
NR==FNR { # for first file
a[$1][0] = $2; # Store columns in
a[$1][1] = $3; # multi dimensional
a[$1][2] = $4; # array
a[$1][3] = $5;
a[$1][4] = $6;
next;
}
$1 in a && $3 == a[$1][1] {
printf("%s\t%s\n", $2, a[$1,0])
}
An answer using gawk v3, where multi-dimensional arrays are not available:
#!/bin/gawk -f
NR==FNR {
a[$1]
b[$1] = $2;
c[$1] = $3;
d[$1] = $4;
e[$1] = $5;
f[$1] = $6;
next;
}
$1 in a && $3 == c[$1] {
print $0
}
One-liner
gawk 'NR==FNR {a[$1]; b[$1] = $2; c[$1] = $3; d[$1] = $4; e[$1] = $5; f[$1] = $6; next; } $1 in a && $3 == c[$1] { print $0 }' /tmp/file1 /tmp/file2
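Note that both gawk variants follow the NR==FNR two-file pattern (keys from the first file, rows to match from the second, here /tmp/file1 and /tmp/file2), whereas the question and the first answer operate on a single file.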