Add new row or update existing row in a file using awk
I want to update file1 on the basis of file2. If any row is new in file2 then it should be added to file1. If any row from file2 is already in file1, then that row should be updated with the row from file2 if file2's timestamp is newer.
file1
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051015,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
file2
DL,1111111101,201312041013,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111102,201312051016,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111104,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
newfile1 (desired output)
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
Notes:
2nd field should be unique in the output.
Addition of a new value: for "1111111104", the latest row in file2 is taken, because its date (201312051016) in the 3rd field is newer than the other entry's date (201312051014).
Update of an existing value: "1111111102" is updated with the newer row, based on the date in the 3rd field.
file1 is very LARGE whereas file2 has 5-10 entries only.
The row with 2nd field "1111111101" doesn't need to be updated because its entry in file1 already has a later date (201312051014) than the date in file2 (201312041013).
I haven't tried much on this because the conditions are complex for me as a beginner. Here is my attempt so far:
BEGIN { FS = OFS = "," }
FNR == NR {
    m = $2
    a[m] = $0
    next
}
{
    if ($2 in a) {
        split(a[$2], datetime, ",")
        if ($3 > datetime[3])
            print $0
        else
            print a[$2] "Old time"
    }
    else
        print $0 "NOMATCH"
    delete a[$2]
}
Assuming that you can start your awk as follows:
awk -f script.awk input2.csv input1.csv > result.csv
you can use the following script to obtain the desired output:
BEGIN {
    FS = OFS = ","
}
FILENAME == "input2.csv" {
    # keep only the newest entry per key in case file2 contains duplicates
    if (!($2 in date) || $3 > date[$2]) {
        date[$2] = $3
        data[$2] = $0
        used[$2] = 0
    }
}
FILENAME == "input1.csv" {
    if ($2 in date) {
        used[$2] = 1
        if ($3 < date[$2])
            print data[$2]
        else
            print $0
    } else {
        print $0
    }
}
END {
    # entries from file2 that never matched a row in file1 are new: append them
    for (key in used) {
        if (used[key] == 0)
            print data[key]
    }
}
Notes:
The script takes advantage of the assumption that file2 is much smaller than file1: it only keeps the few entries of file2 in an array.
The new entries are simply appended to the output; there is no sorting. If sorted output is required, extra effort is needed (see the sort pipeline below).
EDIT
Heeding @JonathanLeffler's remark about the way I determine which file is being processed, I would like to offer an alternative version that may (or may not :-) ) be a little more straightforward to understand than checking NR == FNR. However, it only works for sufficiently recent versions of awk that can return the size of an array as length(array):
BEGIN {
    FS = ","
}
{
    # The following effectively creates an array entry for each filename found
    # (for already "known" filenames the existing entry is simply overwritten).
    files[FILENAME] = 1
    # check the number of files we have seen so far
    if (length(files) == 1) {
        # we are still in the first file (file2): keep the newest entry per key
        if (!($2 in date) || $3 > date[$2]) {
            date[$2] = $3
            data[$2] = $0
            used[$2] = 0
        }
    } else {
        # we are in the second file (or any other following file)
        if ($2 in date) {
            used[$2] = 1
            if ($3 < date[$2])
                print data[$2]
            else
                print $0
        } else {
            print $0
        }
    }
}
END {
    for (key in used) {
        if (used[key] == 0)
            print data[key]
    }
}
Also, if you require your output to be sorted according to the second field, you can replace the call to awk with this:
awk -f script.awk input2.csv input1.csv | sort -t "," -n -k 2 > result.csv
The latter, of course, works for both versions of the script.
Since file1 is very large but file2 is very small (5-10 entries), you need to read all of file2 into memory first, dealing with the duplicate values as you go. As a result, you'll have an array, indexed by the record number (field 2), holding the new data; you should also record the date for each record in a separate array. Then, as you read the main file, you look up the record number and the date in the arrays and, if necessary, substitute the saved new record for the incoming old record.
Your outline script is most of the way there. It is more complex than it needs to be because you didn't save the incoming dates, so you have to split the saved line to recover them. This more or less works:
awk -F, '
FNR == NR { if (!($2 in date) || date[$2] < $3) { date[$2] = $3; line[$2] = $0 } next }
{
    if ($2 in date) {
        if (date[$2] > $3)
            print line[$2]
        else
            print
        delete line[$2]
        delete date[$2]
    }
    else
        print
}
END { for (l in line) print line[l] }' file2 file1
Sample output for given data:
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
However, if there were 4 new records, there's no guarantee that they'd be in sorted order, though they would all be at the end of the list. It would be possible to upgrade the script to print the new records at the appropriate place in the list if the input is guaranteed to be in sorted order. You simply have to search through the list of lines to see whether there are any lines that should be printed before the current line, and if so, do so (and delete the record so that they are not printed at the end).
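For example, a rough sketch of that sorted-merge idea might look like the following. It assumes file1 is sorted in ascending order on field 2 and reuses the script above; the flush loops and the variable names smallest, k are my additions for illustration, not part of the original answer.

awk -F, '
FNR == NR { if (!($2 in date) || date[$2] < $3) { date[$2] = $3; line[$2] = $0 } next }
{
    # flush any pending file2-only records whose key sorts before the current key;
    # under the sortedness assumption such keys can no longer match a file1 row
    while (1) {
        smallest = ""
        for (k in line)
            if (k + 0 < $2 + 0 && (smallest == "" || k + 0 < smallest + 0))
                smallest = k
        if (smallest == "") break
        print line[smallest]
        delete line[smallest]; delete date[smallest]
    }
    if ($2 in date) {
        if (date[$2] > $3) print line[$2]; else print
        delete line[$2]; delete date[$2]
    }
    else print
}
END {
    # whatever is still pending is larger than every key in file1: emit in key order
    while (1) {
        smallest = ""
        for (k in line)
            if (smallest == "" || k + 0 < smallest + 0) smallest = k
        if (smallest == "") break
        print line[smallest]; delete line[smallest]
    }
}' file2 file1

Because file2 holds only 5-10 entries, the repeated minimum scan over the pending set costs essentially nothing per line of file1.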
Note that uniqueness in the output depends on uniqueness in the input (file1): if field 2 is repeated in file1, this code won't notice. There is also nothing that can be done about it with the current design even if a duplicate were spotted; the old row has already been printed, so printing the new row would simply produce a duplicate. If you were worried about this, you could design the awk script to keep the whole of file1 in memory and only print anything once all of the input has been processed. Needless to say, this uses a lot more memory than the current design and will generally be less efficient because of that. Nevertheless, it could be done if needed.
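A rough sketch of that buffer-everything variant could look like the following. This is hypothetical, not from the answer above: the arrays seen and order and the whole-of-file1 buffering are my additions, and it only makes sense if file1 actually fits in memory.

awk -F, '
FNR == NR { if (!($2 in date) || date[$2] < $3) { date[$2] = $3; line[$2] = $0 } next }
{
    # buffer file1 instead of printing: keep only the newest row per key,
    # whether it came from file2 or from an earlier duplicate in file1
    if (!($2 in date) || $3 > date[$2]) { date[$2] = $3; line[$2] = $0 }
    # remember the order in which keys were first seen in file1
    if (!($2 in seen)) { seen[$2] = 1; order[++n] = $2 }
}
END {
    # rows whose key appeared in file1 come out in their original order...
    for (i = 1; i <= n; i++) {
        print line[order[i]]
        delete line[order[i]]
    }
    # ...followed by the keys that existed only in file2
    for (k in line) print line[k]
}' file2 file1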
Related
How to find the maximum value for the field by ignoring the lines with characters using awk?
Since I am a newbie to awk, please help me with your suggestions. I tried the commands below to filter the maximum value and to ignore the first and last lines of the sample text file, and they work when I try them separately. My query: I need to ignore the last line and the first few lines of the file, and then take the maximum value of field 7 using awk. I also need to ignore the lines that contain alphabetic characters. Can anyone suggest how to combine both commands to get the required output?
Sample file:
Linux 3.10.0-957.5.1.el7.x86_64 (j051s784) 11/24/2020 _x86_64_ (8 CPU)
12:00:02 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
12:10:01 AM 4430568 61359128 93.27 1271144 27094976 66771548 33.04 39005492 16343196 1348
12:20:01 AM 4423380 61366316 93.28 1271416 27102292 66769396 33.04 39012312 16344668 1152
12:30:04 AM 4406324 61383372 93.30 1271700 27108332 66821724 33.06 39028320 16343668 2084
12:40:01 AM 4404100 61385596 93.31 1271940 27107724 66799412 33.05 39031244 16344532 1044
06:30:04 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
07:20:01 PM 3754904 62034792 94.29 1306112 27555948 66658632 32.98 39532204 16476848 2156
Average: 4013043 61776653 93.90 1293268 27368986 66755606 33.03 39329729 16427160 2005
Commands used:
cat testfile | awk '{print $7}' | head -n -1 | tail -n+7
awk 'BEGIN{a= 0}{if ($7>0+a) a=$7} END{print a}' testfile
Expected output: the maximum value of column 7, excluding the lines that contain alphabetic characters.
1st solution (generic): here the name of the field we want the maximum of is passed to an awk variable; the script finds the corresponding field number from the very first line automatically and works from there. This assumes that your first line contains the field name you are looking for.
awk -v var="kbcached" '
FNR==1{
  for(i=1;i<=NF;i++){
    if($i==var){ field=i }
  }
  next
}
/kbmemused/{ next }
{
  if($2!~/^[AP]M$/){ val=$(field-1) }
  else             { val=$field }
}
{
  max=(max>val?max:val)
  val=""
}
END{
  print "Maximum value is:" max
}
' Input_file
2nd solution (as per the shown samples only): you could try the following, based on your shown samples only; I am assuming you want the value of the kbcached column.
awk '
/kbmemfree/{ next }
{
  if($2!~/^[AP]M$/){ val=$6 }
  else             { val=$7 }
}
{
  max=(max>val?max:val)
  val=""
}
END{
  print "Maximum value is:" max
}
' Input_file
awk '$7 ~ /^[[:digit:]]+$/ && $1 != "Average:" { max[$7]="" }
END {
    PROCINFO["sorted_in"] = "@ind_num_asc"
    for (i in max) { maxtot = i }
    print maxtot
}' file
One liner:
awk '$7 ~ /^[[:digit:]]+$/ && $1 != "Average:" { max[$7]="" } END { PROCINFO["sorted_in"]="@ind_num_asc"; for (i in max) { maxtot=i } print maxtot }' file
Using GNU awk, search for lines where field 7 consists only of digits and field 1 is not "Average:". For each such line, create an array entry with field 7 as the index. At the end, sort the array indices in ascending numeric order and loop through the array, setting a maxtot variable. The last entry in the max array is the highest value of field 7, so print maxtot.
bash or awk - generating report from complex data set
I have a program that generates a large data file, and I put a small sample in the input section below. What I am trying to do is start with an AOUT, then look at the 4th column to find its next connection, which shows up in the second column somewhere else in the file, and repeat those steps until the chain ends with an AIN in the first column. The number of connections between the AOUT and the AIN varies from just one to over ten. If there isn't an AIN at the end, there shouldn't be any output. The output should start with the AOUT and show each connection until it reaches the AIN. Is there a way to use awk or anything else to create my desired output?
input (this is a small section; there are many more entries, and the order in which they appear is not standard):
AOUT,03xx:LY0372A,LIC0372.OUT,LIC0372
PIDA,03xx:LIC0372,LT372_SEL.OUT,LT372_SEL
SIGSEL,03xx:LT372_SEL,LT1_0372.PNT,LT1_0372
AIN,03xx:LT1_0372
output:
03xx:LY0372A
 =03xx:LT372_SEL.OUT
 =03xx:LT1_0372.PNT
 =03xx:LT1_0372
output format:
(AOUT)
 =(any number of jumps)
 =(any number of jumps)
 =(AIN)
Without more input and answers to the questions in the comments above, a possible solution in AWK could be:
#!/bin/bash
awk -F',' '{
    if ($1 == "AOUT") {
        output = $2 "\n"
        connector = $4
        sub(":.*", "", $2)
        label = $2
    } else if ($1 == "AIN") {
        output = output " =" $2
        print output
        output = ""
    } else if (output != "") {
        if ($2 == label ":" connector) {
            output = output " =" label ":" $3 "\n"
            connector = $4
        }
    }
}' input.csv
How to find records with minimum value in a specific column in multiple files?
I have 4000 two-column .dat files. From each file, I need to identify the first minimum value of column 2 and print the corresponding row. This should then run over all files in the folder and append those rows to a new file. The file names include a common string, e.g. fig_3-28333.dat, where 28333 is the file number. I have tried the code below:
awk 'BEGIN{min=0}{if(($2)>min) min=($2)}END {print line}' cat >> new.dat
The output file is expected to look like:
file number   Column 1   column2
28333         x value    first minimum value
28334         x value    first minimum value
NOTE: This only works with gawk (which understands the ENDFILE pattern), not with regular awk.
Here is my script, min.awk:
BEGIN {
    print "file number Column 1 column2"
}
FNR == 1 {
    min = $2
    first = $1
    second = $2
}
$2 < min {
    min = $2
    first = $1
    second = $2
}
ENDFILE {
    # Extract the file number to a[1]
    match(FILENAME, /.*-([0-9]+)\.dat/, a)
    print a[1], first, second
}
Notes:
The BEGIN pattern prints the heading.
At the first line of each file (pattern: FNR == 1), establish the minimum value.
For those lines whose second value is less than the minimum (pattern: $2 < min), establish the new minimum value.
At the end of each file, print out the minimum value for that file.
Invoke the script:
gawk -f min.awk *.dat
Update
After reviewing my script, I noticed duplicated code, which I can eliminate by combining the two blocks:
BEGIN {
    print "file number Column 1 column2"
}
FNR == 1 || $2 < min {
    min = $2
    first = $1
    second = $2
}
ENDFILE {
    # Extract the file number to a[1]
    match(FILENAME, /.*-([0-9]+)\.dat/, a)
    print a[1], first, second
}
Analysing two files using awk with if condition
I have two files. The first contains names, numbers and days for all samples.
sam_name.csv
Number,Day,Sample
171386,0,38_171386_D0_2-1.raw
171386,0,38_171386_D0_2-2.raw
171386,2,30_171386_D2_1-1.raw
171386,2,30_171386_D2_1-2.raw
171386,-1,40_171386_D-1_1-1.raw
171386,-1,40_171386_D-1_1-2.raw
The second includes information about batches (last column).
sam_batch.csv
Number,Day,Quar,Code,M.F,Status,Batch
171386,0,1,x,F,C,1
171386,1,1,x,F,C,2
171386,2,1,x,F,C,5
171386,-1,1,x,F,C,6
I would like to look up the batch information (using two conditions, number and day) and add it to the first file. I have used an awk command to do that, but I only get results for one time point (-1). Here is my command:
awk -F"," 'NR==FNR{number[$1]=$1;day[$1]=$2;batch[$1]=$7; next}{if($1==number[$1] && $2==day[$1]){print $0 "," number[$1] "," day[$1] "," batch[$1]}}' sam_batch.csv sam_nam.csv
Here are my results (the file sam_name, the number and day from sam_batch just to check whether the condition is working, and the batch number, which is the value I need):
Number,Day,Sample,Number,Day, Batch
171386,-1,40_171386_D-1_1-1.raw,171386,-1,6
171386,-1,40_171386_D-1_1-2.raw,171386,-1,6
175618,-1,08_175618_D-1_1-1.raw,175618,-1,2
Here I corrected your AWK code:
awk -F"," 'NR==FNR{
    number_day = $1 FS $2
    batch[number_day] = $7
    next
}
{
    number_day = $1 FS $2
    print $0 "," batch[number_day]
}' sam_batch.csv sam_name.csv
Output:
Number,Day,Sample,Batch
171386,0,38_171386_D0_2-1.raw,1
171386,0,38_171386_D0_2-2.raw,1
171386,2,30_171386_D2_1-1.raw,5
171386,2,30_171386_D2_1-2.raw,5
171386,-1,40_171386_D-1_1-1.raw,6
171386,-1,40_171386_D-1_1-2.raw,6
(No need for double-checking if you understand how the script works.)
Here's another AWK solution (my original answer):
awk -v "b=sam_batch.csv" 'BEGIN {
    FS = OFS = ","
    while ((getline line < b) > 0) {
        n = split(line, a)
        nd = a[1] FS a[2]
        nd2b[nd] = a[n]
    }
}
{ print $1, $2, $3, nd2b[$1 FS $2] }' sam_name.csv
Both solutions parse the file sam_batch.csv at the beginning to build a dictionary of (number, day) -> batch. Then they parse sam_name.csv, printing out the first three fields together with the "Batch" value from the other file.
Awk merge the results of processing two files into a single file
I use awk to extract and calculate information from two different files and I want to merge the results into a single file, in columns (for example, the output of the first file in columns 1 and 2, and the output of the second one in columns 3 and 4). The input files contain:
file1
SRR513804.1218581HWI-ST695_116193610:4:1307:17513:49120 SRR513804.16872HWI ST695_116193610:4:1101:7150:72196 SRR513804.2106179HWI-
ST695_116193610:4:2206:10596:165949 SRR513804.1710546HWI-ST695_116193610:4:2107:13906:128004 SRR513804.544253
file2
>SRR513804.1218581HWI-ST695_116193610:4:1307:17513:49120
TTTTGTTTTTTCTATATTTGAAAAAGAAATATGAAAACTTCATTTATATTTTCCACAAAG
AATGATTCAGCATCCTTCAAAGAAATTCAATATGTATAAAACGGTAATTCTAAATTTTAT
ACATATTGAATTTCTTTGAAGGATGCTGAATCATTCTTTGTGGAAAATATAAATGAAGTT
TTCATATTTCTTTTTCAAAT
To parse the first file I do this:
awk '
{
    s = NF
    center = $1
}
{
    printf "%s\t %d\n", center, s
}' file1
To parse the second file I do this:
awk '
/^>/ {
    if (count != "")
        printf "%s\t %d\n", seq_id, count
    count = 0
    seq_id = $0
    next
}
NF {
    long = length($0)
    count = count + long
}
END {
    if (count != "")
        printf "%s\t %d\n", seq_id, count
}' file2
My provisional solution is to create a temporary file in the first step and overwrite it in the second. Is there a more "elegant" way to get this output?
I am not fully clear on the requirement; if you can update the question, maybe we can help improve the answer. However, from what I have gathered, you would like to summarize the output from both files. I have made the assumption that the content in both files is in sequential order. If that is not the case, additional checks will be needed while printing the summary.
Content of script.awk (re-using most of your existing code):
NR == FNR {
    s[NR] = NF
    center[NR] = $1
    next
}
/^>/ {
    seq_id[++y] = $0
    ++i
    next
}
NF {
    long[i] += length($0)
}
END {
    for (x = 1; x <= length(s); x++) {
        printf "%s\t %d\t %d\n", center[x], s[x], long[x]
    }
}
Test:
$ cat file1
SRR513804.1218581HWI-ST695_116193610:4:1307:17513:49120 SRR513804.16872HWI ST695_116193610:4:1101:7150:72196 SRR513804.2106179HWI-
ST695_116193610:4:2206:10596:165949 SRR513804.1710546HWI-ST695_116193610:4:2107:13906:128004 SRR513804.544253
$ cat file2
>SRR513804.1218581HWI-ST695_116193610:4:1307:17513:49120
TTTTGTTTTTTCTATATTTGAAAAAGAAATATGAAAACTTCATTTATATTTTCCACAAAG
AATGATTCAGCATCCTTCAAAGAAATTCAATATGTATAAAACGGTAATTCTAAATTTTAT
ACATATTGAATTTCTTTGAAGGATGCTGAATCATTCTTTGTGGAAAATATAAATGAAGTT
TTCATATTTCTTTTTCAAAT
$ awk -f script.awk file1 file2
SRR513804.1218581HWI-ST695_116193610:4:1307:17513:49120	4	200
ST695_116193610:4:2206:10596:165949	3	0