print matched column from multiple inputs and concatenate the output
A.lst
1091 1991 43.5 -30.1 -11.4
1091 1993 -11.2 -28.5 -2.7
1091 1997 35.8 -13.2 -4.5
1091 2003 -26.8 -23.9 0.6
1091 2007 23.8 64.8 3.5
1091 2008 -45.8 70.7 -6.0
1100 1967 24.5 -25.6 -12.7
1100 1971 -935.0 9.3 52.0
1100 1972 -388.8 59.1 20.4
1100 1974 17.7 48.9 3.0
B.lst
1091 1991 295 1
1091 1993 293 3
1091 1997 296 7
1091 2003 287 13
1091 2007 283 17
1091 2008 282 18
1100 1967 1419 3
1100 1971 56 7
1031 2023 283 17
1021 2238 282 18
1140 1327 1419 3
1150 3971 56 7
1100 1972 55 8
1100 1974 1445 10
I am using this command (from a previous answer; I can't remember where, but I'll credit the author once I find it):
NR==FNR{
    a[$1,$2]=$1; next
}
{
    s=SUBSEP
    k=$3 s $4
}
k in a{ print $0 }
But I have no idea how to combine the output. It should print only the matched lines, appending certain columns ($3 and $4) from B.lst:
1091 1991 43.5 -30.1 -11.4 295 1
1091 1993 -11.2 -28.5 -2.7 293 3
1091 1997 35.8 -13.2 -4.5 296 7
1091 2003 -26.8 -23.9 0.6 287 13
1091 2007 23.8 64.8 3.5 283 17
1091 2008 -45.8 70.7 -6.0 282 18
1100 1967 24.5 -25.6 -12.7 1419 3
1100 1971 -935.0 9.3 52.0 56 7
1100 1972 -388.8 59.1 20.4 55 8
1100 1974 17.7 48.9 3.0 1445 10

Could you please try the following.
awk 'FNR==NR{array[$1,$2]=$3 OFS $4;next} (($1,$2) in array){print $0,array[$1,$2]}' file_B file_A
Here is a non-one-liner form of the above solution.
awk '
FNR==NR{
array[$1,$2]=$3 OFS $4
next
}
(($1,$2) in array){
print $0,array[$1,$2]
}
' file_B file_A
Explanation: Adding a detailed explanation of the above code.
awk ' ##Starting awk program here.
FNR==NR{ ##Checking condition FNR==NR which will be TRUE when file_B is being read.
array[$1,$2]=$3 OFS $4 ##Creating an array named array whose index is $1,$2 and value is $3 OFS $4.
next ##Using next will skip all further statements from here.
} ##Closing FNR==NR condition BLOCK here.
(($1,$2) in array){ ##Checking if the index $1,$2 is present in array; if so, do the following.
print $0,array[$1,$2] ##Printing the current line followed by the value of array with index $1,$2.
} ##Closing the condition BLOCK here.
' file_B file_A ##Mentioning Input_file names here.
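With the sample files from the question, file_B is B.lst and file_A is A.lst, so the call becomes:
awk 'FNR==NR{array[$1,$2]=$3 OFS $4;next} (($1,$2) in array){print $0,array[$1,$2]}' B.lst A.lst
This should reproduce the ten merged lines of the expected output; the extra B.lst rows (1031 2023 and so on) are stored in the array but never matched.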

Related

cat specific columns in 3 files

I have 3 files, such as:
file1_file:
scaffold_159 625 YP_009345712 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
scaffold_159 625 YP_009345714 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
IDBA_scaffold_24562 625 YP_009345713 0.464 56 20 2 2549 2686 10 65 7.513E-03 37
file2_file:
scaffold_159 625 YP_009345717 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
scaffold_159 625 YP_009345718 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
IDBA_scaffold_24562 625 YP_009345719 0.464 56 20 2 2549 2686 10 65 7.513E-03 37
file3_file:
scaffold_159 625 YP_009345711 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
scaffold_159 625 YP_009345723 0.284 447 289 9 96675 95377 196 625 6.963E-38 158
IDBA_scaffold_24562 625 YP_009345721 0.464 56 20 2 2549 2686 10 65 7.513E-03 37
and I would like to get only the 3rd column of the 3 files in a single new_file.txt.
Here I should get:
YP_009345712
YP_009345714
YP_009345713
YP_009345717
YP_009345718
YP_009345719
YP_009345711
YP_009345723
YP_009345721
So far I have tried:
cat file_names.txt | while read line; do cat /path1/${line}/path2/${line}_file > new_file.txt; done
in file_names.txt I have:
file1
file2
file3
but I do not know how to extract only the 3rd column...
PS: the 3 files are not in the same directory:
/path1/file1/path2/file1_file
/path1/file2/path2/file2_file
/path1/file3/path2/file3_file
EDIT: After chatting with the OP, I learned that the files could be in different locations; in that case, could you please try the following. It assumes you have an input file listing all the file names. I have not tested it yet.
file_name=$(awk '{val=(val?val OFS:"")"/path1/" $0 "/path2/" $0 "_file"} END{print val}' file_names.txt)
awk '{print $3}' $file_name   # deliberately unquoted so the shell splits it into one argument per file
OR
awk '{print $3}' $(awk '{val=(val?val OFS:"")"/path1/" $0 "/path2/" $0 "_file"} END{print val}' file_names.txt)
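As an aside, the loop from the question was also close; here is a minimal sketch of a fix (assuming the /path1/<name>/path2/<name>_file layout described in the question). It moves the redirection outside the loop, since the original > truncated new_file.txt on every iteration, and lets awk pick the third column:
while read -r line; do
  awk '{print $3}' "/path1/${line}/path2/${line}_file"
done < file_names.txt > new_file.txt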
You could use awk here.
awk '{print $3}' /complete_path/file1 /complete_path/file2 /complete_path/file3
I think it can be simpler with
$ sed 's|.*|"/path1/&/path2/&_file"|' file_names.txt | xargs awk '{print $3}'
awk will be called only once, and the quotes added around each path let xargs treat a path containing spaces as a single argument.
So you have a file fnames.txt with hundreds of strings:
str1
str2
str3
str4
...
and each string represents a file located in
/path1/${str}/path2/${str}_file
where ${str} is a value from file fnames.txt.
Now you want to read the third column, from the third file only:
$ str="$(awk '(NR==3){print; exit}' fnames.txt)"
$ file="/path1/${str}/path2/${str}_file"
$ awk '{print $3}' "$file" > new_file.txt
Always remember the KISS principle

Problems making an awk between range of columns

I have a file called sds
$ head sds
2557 386 fs://name/user/hive/ware/doc1/do_fact/date=20190313/fact=6
2593 393 fs://name/user/hive/ware/toc1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/user/hive/ware/dac2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/user/hive/ware/tec2/do_des/mes=25
340 332 fs://name/user/hive/ware/dc1/venta/year=2018/month=12
Using awk with / as the field separator, I want to delete /user/hive/ware and replace $7 with 1 when it matches /_1$/, and with 2 otherwise.
The code that I used was:
awk -F"/" '{ if ($7 ~ /_1$/)
print $1"//"$3"/1/"$7-$NF
else
print $1"//"$3"/2/"$7-$NF}' sds
but the result is wrong ($7-$NF is evaluated as numeric subtraction; awk has no field-range syntax).
I would like an output like:
2557 386 fs://name/1/do_fact/date=20190313/fact=6
2593 393 fs://name/1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/2/do_des/mes=25
340 332 fs://name/1/venta/year=2018/month=12
$ awk 'BEGIN{FS=OFS="/"} {sub("/user/hive/ware",""); $4=($4~/1$/ ? 1 : 2)} 1' file
2557 386 fs://name/1/do_fact/date=20190313/fact=6
2593 393 fs://name/1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/2/do_des/mes=25
340 332 fs://name/1/venta/year=2018/month=12
or if you don't REALLY want to remove the string /user/hive/ware and instead want to remove the 4th through 6th fields no matter their value:
$ awk 'BEGIN{FS=OFS="/"} {$4=$5=$6="\n"; sub(/(\/\n){3}/,""); $4=($4~/1$/ ? 1 : 2)} 1' file
2557 386 fs://name/1/do_fact/date=20190313/fact=6
2593 393 fs://name/1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/2/do_des/mes=25
340 332 fs://name/1/venta/year=2018/month=12
with sed
$ sed -E 's_/user/hive/ware/[^/]+(.)/_/\1/_' file
2557 386 fs://name/1/do_fact/date=20190313/fact=6
2593 393 fs://name/1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/2/do_des/mes=25
340 332 fs://name/1/venta/year=2018/month=12
it's not really a conditional replacement.
You can use awk and its gsub function to perform replacement on the selected columns.
awk 'BEGIN{FS=OFS="/"}{gsub("user/hive/ware/","");gsub(/^[^12]+/,"",$4)}1' inputfile
2557 386 fs://name/1/do_fact/date=20190313/fact=6
2593 393 fs://name/1/do_gas/idi_centr=6372/mes=20
2594 343 fs://name/2/do_gas2/idi_centr=6354/mes=21
349 307 fs://name/2/do_des/mes=25
340 332 fs://name/1/venta/year=2018/month=12

Forming a variable for a graph using result of searching with awk

I'm using Cacti to graph the CPU usage of equipment with 7 modules; the command used shows 12 samples for each module. I need to use awk to find each module name and then form a variable with the syntax [module]:[12th CPU sample], for example: MSCBC05:47.
Below is an extract of the command output mentioned:
ACT AD-46 TIME 141216 1556 MSCBC05
PROCESSOR LOAD DATA
INT PLOAD CALIM OFFDO OFFDI FTCHDO FTCHDI OFFMPH OFFMPL FTCHMPH FTCHMPL
1 46 56250 656 30 656 30 1517 2 1517 2
2 47 56250 659 32 659 32 1448 1 1448 1
3 46 56250 652 22 652 22 1466 1 1466 1
4 47 56250 672 33 672 33 1401 0 1401 0
5 47 56250 674 38 674 38 1446 2 1446 2
6 45 56250 669 22 669 22 1365 1 1365 1
7 45 56250 674 26 674 26 1394 2 1394 2
8 46 56250 664 24 664 24 1396 0 1396 0
9 47 56250 686 24 686 24 1425 2 1425 2
10 47 56250 676 31 676 31 1386 0 1386 0
11 48 56250 702 25 702 25 1414 2 1414 2
12 47 56250 703 31 703 31 1439 2 1439 2
Complete output
https://dl.dropboxusercontent.com/u/33222611/raw_output.txt
I suggest
awk '$1 == "ACT" { sub(/\r/, ""); curmsc = $6 } curmsc != "" && $1 == "12" { print curmsc ":" $2; curmsc = "" }' raw_output.txt
Written more readably, that is
$1 == "ACT" { # In the first line of an ACT block
sub(/\r/, "") # remove the trailing carriage return. Could also pre-convert the file with dos2unix or similar.
curmsc = $6 # remember MSC
}
curmsc != "" && $1 == "12" { # if we are in such a block and the first token is 12
print curmsc ":" $2 # print the stuff we want to know
curmsc = "" # then flag that we're outside a block
}
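Run against just the extract shown in the question, this should print a single pair: the module name and its 12th PLOAD sample.
$ awk '$1 == "ACT" { sub(/\r/, ""); curmsc = $6 } curmsc != "" && $1 == "12" { print curmsc ":" $2; curmsc = "" }' raw_output.txt
MSCBC05:47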

awk capture input quotes as a single field

So I have this dataset, where the first column is a name inside quotes. Is it possible to capture the name as a single field?
"Mazda RX4" 21 6 160 110 3.9 2.62 16.46 0 1 4 4
"Mazda RX4 Wag" 21 6 160 110 3.9 2.875 17.02 0 1 4 4
"Datsun 710" 22.8 4 108 93 3.85 2.32 18.61 1 1 4 1
"Hornet 4 Drive" 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
"Hornet Sportabout" 18.7 8 360 175 3.15 3.44 17.02 0 0 3 2
"Valiant" 18.1 6 225 105 2.76 3.46 20.22 1 0 3 1
"Duster 360" 14.3 8 360 245 3.21 3.57 15.84 0 0 3 4
"Merc 240D" 24.4 4 146.7 62 3.69 3.19 20 1 0 4 2
"Merc 230" 22.8 4 140.8 95 3.92 3.15 22.9 1 0 4 2
"Merc 280" 19.2 6 167.6 123 3.92 3.44 18.3 1 0 4 4
Note that sometimes the name is a single field (like "Valiant"), sometimes two ("Mazda RX4") or three ("Mazda RX4 Wag").
So, based on the number of fields, I came up with this awk code that works as I wanted; however, I wonder whether there is a more systematic way to do it.
awk '{name=$1; for (i=2; i<=NF-11; i++) name=name " " $i; printf "%s\n", name}' data/mtcars.dat | head
Mazda RX4
Mazda RX4 Wag
Datsun 710
Hornet 4 Drive
Hornet Sportabout
Valiant
Duster 360
Merc 240D
Merc 230
Merc 280
You could use " as the input field separator. That would assign an empty field to $1, the full name to $2, and the rest of the line to $3.
$ awk 'BEGIN{FS="\""}{print $2}' < test.dat
Mazda RX4
Mazda RX4 Wag
Datsun 710
Hornet 4 Drive
Hornet Sportabout
Valiant
Duster 360
Merc 240D
Merc 230
Merc 280
Just to make it as short as possible:
awk -F\" '$0=$2' file
Mazda RX4
Mazda RX4 Wag
Datsun 710
Hornet 4 Drive
Hornet Sportabout
Valiant
Duster 360
Merc 240D
Merc 230
Merc 280
Or somewhat more robust (the $0=$2 version above skips a record whose second field evaluates as false, such as "0" or an empty string):
awk -F\" '{$0=$2}1' file
With GNU awk, FPAT can define what a field looks like (here, any run of non-quote characters), so the quoted name becomes $1 and the tiny program NF=1 truncates each record to that field:
awk NF=1 FPAT='[^"]+' file
Result
Mazda RX4
Mazda RX4 Wag
Datsun 710
Hornet 4 Drive
Hornet Sportabout
Valiant
Duster 360
Merc 240D
Merc 230
Merc 280

How to take the 10 last values from a file in awk

I have a file that contains this data:
345
234
232
454
343
676
887
324
342
355
657
786
343
879
088
342
121
345
534
657
767
I need to cut the last 10 values and put them in another file:
786
343
879
088
342
121
345
534
657
767
No need to use awk, just tail the data:
tail -n 10 input.txt > example.txt
If you really wanted to use awk, you would have to keep track of the last 10 lines ($0) and print them in END, as the answers below demonstrate, though this would be overkill.
This is what the tail command is for:
$ tail -10 file
786
343
879
088
342
121
345
534
657
767
To store the output in a new file, use the redirection operator:
$ tail -10 file > new_file
However, if you really want to do it with awk, then the brute-force approach is to store each line in an array and print the last 10 elements at the end of the file:
$ awk '{a[NR]=$0}END{for(i=(NR-9);i<=NR;i++)print a[i]}' file
786
343
879
088
342
121
345
534
657
767
Again, to store the output in a new file use the redirection operator:
$ awk '{a[NR]=$0}END{for(i=(NR-9);i<=NR;i++)print a[i]}' file > new_file
The previous method is very inefficient, as we have to build an array the same size as the file we are reading. A much better approach is to use the modulus operator to keep an array of only 10 elements holding the last 10 lines read:
$ awk '{a[NR%10]=$0} END{for(i=(NR+1)%10; j++<10; i=(i+1)%10) print a[i]}' file
786
343
879
088
342
121
345
534
657
767
This can be generalized to the last n lines like so (e.g. n=3):
$ awk '{a[NR%n]=$0} END{for(i=(NR+1)%n; j++<n; i=(i+1)%n) print a[i]}' n=3 file
534
657
767
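Equivalently, n can be passed with awk's -v option, which sets the variable before any input is read:
$ awk -v n=3 '{a[NR%n]=$0} END{for(i=(NR+1)%n; j++<n; i=(i+1)%n) print a[i]}' file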
Code for GNU sed:
get the last 10 lines
sed ':a;$q;N;11,$D;ba' file
get all but the last 10 lines
sed ':a;$d;N;2,10ba;P;D' file
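For symmetry, here is a sketch of an awk counterpart to that second sed command, reusing the ring-buffer idea from above: each line is printed only once 10 further lines have been read, so the last 10 never appear.
awk 'NR>10{print a[NR%10]} {a[NR%10]=$0}' file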