Comparing two columns in two files using awk with duplicates

File 1
A4gnt 0 0 0 0.3343
Aaas 2.79 2.54 1.098 0.1456
Aacs 0.94 0.88 1.063 0.6997
Aadac 0 0 0 0.3343
Aadacl2 0 0 0 0.3343
Aadat 0.01 0 1.723 0.7222
Aadat 0.06 0.03 1.585 0.2233
Aaed1 0.28 0.24 1.14 0.5337
Aaed1 1.24 1.27 0.976 0.9271
Aaed1 15.91 13.54 1.175 0.163
Aagab 1.46 1.14 1.285 0.3751
Aagab 6.12 6.3 0.972 0.6569
Aak1 0.02 0.01 1.716 0.528
Aak1 0.1 0.19 0.561 0.159
Aak1 0.14 0.19 0.756 0.5297
Aak1 0.16 0.18 0.907 0.6726
Aak1 0.21 0 0 0.066
Aak1 0.26 0.27 0.967 0.9657
Aak1 0.54 1.65 0.325 0.001
Aamdc 0.04 0 15.461 0.0875
Aamdc 1.03 1.01 1.019 0.8817
Aamdc 1.27 1.26 1.01 0.9285
Aamdc 7.21 6.94 1.039 0.7611
Aamp 0.06 0.05 1.056 0.9136
Aamp 0.11 0.11 1.044 0.9227
Aamp 0.12 0.13 0.875 0.7584
Aamp 0.22 0.2 1.072 0.7609
File 2
Adar
Ak3
Alox15b
Ampd2
Ampd3
Ankrd17
Apaf1
Aplp1
Arih1
Atg14
Aurkb
Bcl2l14
Bmp2
Brms1l
Cactin
Camta2
Cav1
Ccr5
Chfr
Clock
Cnot1
Crebrf
Crtc3
Csnk2b
Cul3
Cx3cl1
Dnaja3
Dnmt1
Dtl
Ednra
Eef1e1
Esr1
Ezr
Fam162a
Fas
Fbxo30
Fgr
Flcn
Foxp3
Frzb
Fzd6
Gdf3
Hey2
Hnf4
The desired output would be: wherever the first column matches in both files, print all the columns from the first file (including duplicate rows).
I've tried
awk 'NR==FNR{a[$1]=$2"\t"$3"\t"$4"\t"$5; next} { if($1 in a) { print $0,a[$1] } }' File2 File1 > output
But for some reason I'm getting just a few hits. Does anyone know why?

Read the second file first and store its 1st-column values as keys of array arr; then read the first file, and if the 1st column of file1 exists in arr (built from file2), print the current record from file1.
awk 'FNR==NR{arr[$1];next}$1 in arr' file2 file1
Advantage:
With a[$1]=$2"\t"$3"\t"$4"\t"$5; next, any later record with the same key overwrites the value stored earlier,
but with arr[$1];next we only store the key, and the $1 in arr test prints every matching record from file1, duplicates included.
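To see why the membership test preserves duplicates, here is a minimal sketch with made-up file names and rows (not the data above):
printf 'g1 1 2 3 4\ng1 5 6 7 8\ng2 9 9 9 9\n' > f1    # two rows share the key g1
printf 'g1\n' > f2                                     # keep only key g1
awk 'FNR==NR{arr[$1];next} $1 in arr' f2 f1
# prints both g1 rows and drops the g2 row:
# g1 1 2 3 4
# g1 5 6 7 8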


Searching File2 with 3 columns from File 1 with awk

Does anyone know how to print "Not Found" if there is no match, such that the print output will always contain the same number of lines as File 1?
To be more specific, I have two files with four columns:
File 1:
1 800 800 0.51
2 801 801 0.01
3 802 802 0.01
4 803 803 0.23
File 2:
1 800 800 0.55
2 801 801 0.09
3 802 802 0.88
4 803 804 0.24
This is what I am using now:
$ awk 'NR==FNR{a[$1,$2,$3];next}($1,$2,$3) in a{print $4}' file1.txt file2.txt
This generates the following output:
0.55
0.09
0.88
However, I want to get this:
0.55
0.09
0.88
Not Found
Could you please help?
Sorry if this is presented in a confusing manner; I have little experience with awk and am confused myself.
As a separate issue, I want to end up with a file that has the data from File 2 appended to File 1, like so:
1 800 800 0.51 0.55
2 801 801 0.01 0.09
3 802 802 0.01 0.88
4 803 803 0.23 Not Found
I was going to generate the file as before (let's call it file2-matches.txt), then use the paste command:
paste -d"\t" file1.txt file2-matches.txt > output.txt
But considering I have to do this matching for over 100 files, is there any more efficient way to do this that you can suggest?
Add an else clause:
$ awk 'NR==FNR{a[$1,$2,$3];next} {if (($1,$2,$3) in a) {print $4} else {print "not found"}}' f1 f2
0.55
0.09
0.88
not found
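For the second part of the question (appending File 2's value, or "Not Found", to each File 1 line without the intermediate paste step), one possible sketch, assuming the join key is columns 1-3 in both files:
awk 'NR==FNR { a[$1,$2,$3] = $4; next }                     # read file2 first, keyed on cols 1-3
     { k = $1 SUBSEP $2 SUBSEP $3                           # same key for the current file1 line
       print $0, (k in a ? a[k] : "Not Found") }' file2.txt file1.txt > output.txt
This produces the combined lines directly, e.g. "4 803 803 0.23 Not Found" for the unmatched row.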

obtain averages of field 2 after grouping by field 1 with awk

I have a file with two fields containing numbers, sorted numerically on field 1. The numbers in field 1 range from 1 to 200000 and the numbers in field 2 range from 0 to 1. I want to obtain averages for both field 1 and field 2 in batches of rows.
Here is an example input and the corresponding output for batches of 4 rows:
1 0.12
1 0.34
2 0.45
2 0.40
50 0.60
301 0.12
899 0.13
1003 0.14
1300 0.56
1699 0.43
2100 0.25
2500 0.56
The output would be:
1.5 0.327
563.25 0.247
1899.75 0.45
Here you go:
awk -v n=4 '{s1 += $1; s2 += $2; if (++i % n == 0) { print s1/n, s2/n; s1=s2=0; } }'
Explanation:
Initialize n=4, the batch size
Accumulate the sums: the 1st column into s1, the 2nd into s2
Increment the counter i (awk variables default to 0, so no initialization is needed)
When i is divisible by n, print the two averages and reset the sums (see the note below on partial batches)
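If the number of rows is not a multiple of n, the last partial batch is silently dropped. A possible extension (a sketch, averaging the leftover rows over their actual count) would be:
awk -v n=4 '{s1 += $1; s2 += $2; if (++i % n == 0) { print s1/n, s2/n; s1 = s2 = 0 }}
            END { r = i % n; if (r) print s1/r, s2/r }'    # handle the final partial batch, if any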

restrict pattern to specified strings

I have a set of strings. Let's say (list.txt) they are:
1abc_A
2pqr_X
4ghi_Z
I also have a text file (test.txt), which looks like this:
1abc_A 2pqr_X 0.55 0.87
2pqr_X 3def_Y 0.21 0.24
4ghi_Z 1abc_A 0.98 0.75
2pqr_X 4ghi_Z 0.99 0.76
2pqr_X 2pqr_X 1.00 1.00
I need to get only those lines from test.txt where the strings in columns 1 and 2 both belong to the strings included in list.txt.
In this case, my output would be as follows:
1abc_A 2pqr_X 0.55 0.87
4ghi_Z 1abc_A 0.98 0.75
2pqr_X 4ghi_Z 0.99 0.76
2pqr_X 2pqr_X 1.00 1.00
i.e. all the lines in test.txt EXCEPT the 2nd line, since column 2 of that line (3def_Y) is not among the strings specified in list.txt.
How can I do this in awk?
Please note that test.txt is a large text file, almost 7 GB.
What is the fastest way to go about this problem?
Please help.
awk 'NR==FNR{a[$0];next} ($1 in a) && ($2 in a)' list.txt test.txt
Stores the contents of list.txt as indices of an array, then checks, line by line of test.txt, that its 1st and 2nd fields are both indices of that array. It will work for any size of test.txt since none of test.txt is stored in memory.
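Since test.txt is around 7 GB, the run time is dominated by that single pass. One commonly used tweak, assuming the data is plain ASCII so a byte-oriented locale is acceptable, is to disable locale-aware string handling (filtered.txt is just a placeholder output name):
LC_ALL=C awk 'NR==FNR{a[$0];next} ($1 in a) && ($2 in a)' list.txt test.txt > filtered.txt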

Search multiple columns for values below a threshold using awk or other bash script

I would like to extract lines of a file where specific columns have a value <0.05.
For example if $2 or $4 or $6 has a value <0.05 then I want to send that line to a new file.
I do not want any lines that have a value >0.05 in any of these columns
cat File_1.txt
S_003 P_003 S_006 P_006 S_008 P_008
74.9 0.006 59.6 0.061 72.2 0.002
96.2 0.003 89.4 0.001 106.9 0.000
105.8 0.003 72.6 0.003 86.7 0.002
45.8 0.726 38.5 0.981 43.9 0.800
50.7 0.305 47.8 0.314 46.6 0.615
49.9 0.366 50.4 0.165 48.2 0.392
42.5 0.920 43.7 0.698 40.3 0.970
46.3 0.684 42.9 0.760 47.7 0.438
192.4 0.001 312.8 0.001 274.3 0.001
I tried this using awk, but could only make it work in a very long-winded way:
awk ' $2<=0.05' file_1.txt > file_2.txt
awk ' $4<=0.05' file_2.txt > file_3.txt
etc., and achieved the desired result:
96.2 0.003 89.4 0.001 106.9 0.000
105.8 0.003 72.6 0.003 86.7 0.002
192.4 0.001 312.8 0.001 274.3 0.001
but my file has 198 columns and 57,000 lines.
I also tried combining the conditions in a single awk command, with no luck; it only seems to filter on $2:
awk ' $2<=0.05 || $4<=0.05' File_1.txt > File_2.txt
74.9 0.006 59.6 0.061 72.2 0.002
96.2 0.003 89.4 0.001 106.9 0.000
105.8 0.003 72.6 0.003 86.7 0.002
192.4 0.001 312.8 0.001 274.3 0.001
I'm pretty new at this and would appreciate any advice on how to achieve this using awk.
Thanks
Sam
Perhaps this is what you're looking for. It checks every even-numbered column and keeps a line only if each of those columns contains a value no greater than 0.05:
awk 'NF>1 { for(i=2;i<=NF;i+=2) if ($i>0.05) next }1' File_1.txt
Results:
96.2 0.003 89.4 0.001 106.9 0.000
105.8 0.003 72.6 0.003 86.7 0.002
192.4 0.001 312.8 0.001 274.3 0.001
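As written, the header line is filtered out as well. If you want to keep it, a small variant (assuming the header is always the first line) is:
awk 'NR==1 { print; next } { for (i=2; i<=NF; i+=2) if ($i > 0.05) next; print }' File_1.txt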

SQL linear interpolation based on lookup table

I need to build linear interpolation into an SQL query, using a joined table containing lookup values (more like lookup thresholds, in fact). As I am relatively new to SQL scripting, I have searched for an example code to point me in the right direction, but most of the SQL scripts I came across were for interpolating between dates and timestamps and I couldn't relate these to my situation.
Basically, I have a main data table with many rows of decimal values in a single column, for example:
Main_Value
0.33
0.12
0.56
0.42
0.1
Now, I need to yield interpolated data points for each of the rows above, based on a joined lookup table with 6 rows, containing non-linear threshold values and the associated linear normalized values:
Threshold_Level Normalized_Value
0 0
0.15 20
0.45 40
0.60 60
0.85 80
1 100
So for example, if the value in the Main_Value column is 0.45, the query will lookup its position in (or between) the nearest Threshold_Level, and interpolate this based on the adjacent value in the Normalized_Value column (which would yield a value of 40 in this example).
I really would be grateful for any insight into building a SQL query around this, especially as it has been hard to track down any SQL examples of linear interpolation using a joined table.
It has been pointed out that I could use some sort of rounding, so I have included a more detailed table below. I would like the SQL query to lookup each Main_Value (from the first table above) where it falls between the Threshold_Min and Threshold_Max values in the table below, and return the 'Normalized_%' value:
Threshold_Min Threshold_Max Normalized_%
0.00 0.15 0
0.15 0.18 5
0.18 0.22 10
0.22 0.25 15
0.25 0.28 20
0.28 0.32 25
0.32 0.35 30
0.35 0.38 35
0.38 0.42 40
0.42 0.45 45
0.45 0.60 50
0.60 0.63 55
0.63 0.66 60
0.66 0.68 65
0.68 0.71 70
0.71 0.74 75
0.74 0.77 80
0.77 0.79 85
0.79 0.82 90
0.82 0.85 95
0.85 1.00 100
For example, if the value from the Main_Value table is 0.52, it falls between Threshold_Min 0.45 and Threshold_Max 0.60, so the Normalized_% returned is 50%. The problem is that the Threshold_Min and Max values are not linear. Could anyone point me in the direction of how to script this?
Assuming that for each Main_Value you want the Normalized_Value of the nearest Threshold_Level that is less than or equal to it, you can do it like this:
select t1.Main_Value, max(t2.Normalized_Value) as Normalized_Value
from #t1 t1
inner join #t2 t2 on t1.Main_Value >= t2.Threshold_Level
group by t1.Main_Value
Replace #t1 and #t2 with the correct table names.
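For the range-based lookup table from the second part of the question, a sketch along the same lines (the table and column names #t3 and Normalized_Pct are placeholders; the half-open interval keeps a value that sits exactly on a boundary from matching two rows):
select t1.Main_Value, t3.Normalized_Pct
from #t1 t1
inner join #t3 t3 on t1.Main_Value >= t3.Threshold_Min
                 and t1.Main_Value <  t3.Threshold_Max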