I have data which looks like this
1 3
1 2
1 9
5 4
4 6
5 6
5 8
5 9
4 2
I would like the output to be
1 3,2,9
5 4,6,8,9
4 6,2
This is just sample data, but my original file has many more values.
So this worked.
It basically creates a hash table, using the first column as the key and appending the remaining fields of each line to the value:
awk '{line="";for (i = 2; i <= NF; i++) line = line $i ", "; table[$1]=table[$1] line;} END {for (key in table) print key " => " table[key];}' trial.txt
OUTPUT
4 => 6, 2
5 => 4, 6, 8, 9
1 => 3, 2, 9
I'd write
awk -v OFS=, '
{
key = $1
$1 = ""
values[key] = values[key] $0
}
END {
for (key in values) {
sub(/^,/, "", values[key])
print key " " values[key]
}
}
' file
If you want only the unique values for each key (requires GNU awk for true multi-dimensional arrays, i.e. arrays of arrays):
gawk -v OFS=, '
{ for (i=2; i<=NF; i++) values[$1][$i] = i }
END {
for (key in values) {
printf "%s ", key
sep = ""
for (val in values[key]) {
printf "%s%s", sep, val
sep = ","
}
print ""
}
}
' file
Or perl:
perl -lane '
$key = shift @F;
$values{$key}{$_} = 1 for @F;
} END {
$, = " ";
print $_, join(",", keys %{$values{$_}}) for keys %values;
' file
If you're not concerned with the order of the keys, I think this is the idiomatic awk solution:
$ awk '{a[$1]=($1 in a?a[$1]",":"") $2}
END{for(k in a) print k,a[k]}' file |
column -t
4 6,2
5 4,6,8,9
1 3,2,9
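If you do care about the order in which the keys first appear, one possible variation (my own sketch, not part of the answer above) is to remember that first-seen order in a second array:
$ awk '!($1 in a){order[++n]=$1}                     # remember first-seen key order
       {a[$1]=($1 in a?a[$1]",":"") $2}              # same accumulation as above
       END{for (i=1; i<=n; i++) print order[i],a[order[i]]}' file |
column -t
With the sample input, the output should be:
1  3,2,9
5  4,6,8,9
4  6,2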
I am working on a variant call format (VCF) file. Here is an example of what I am trying to do:
Input:
1 877803 838425 GC G
1 878077 966631 C CCACGG
Output:
1 877803 838425 C -
1 878077 966631 - CACGG
In summary, I am trying to delete the first letter of the longer string (and replace the shorter one with -).
And here is my code:
awk 'BEGIN { OFS="\t" } /#/ {next}
{
m = split($4, a, //)
n = split($5, b, //)
x = "-"
delete y
if (m>n){
for (i = n+1; i <= m; i++) {
y = sprintf("%s", a[i])
}
print $1, $2, $3, y, x
}
else if (n>m){
for (j = m+1; i <= n; i++) {
y = sprintf("%s", b[j]) ## Problem here
}
print $1, $2, $3, x, y
}
}' input.vcf > output.vcf
But I am getting the following error at line 15, not even at line 9:
awk: cmd. line:15: (FILENAME=input.vcf FNR=1) fatal: attempt to use array y in a scalar context
I don't know how to concatenate array elements into one string using awk.
I will be very happy if you guys help me.
Merry X-Mas!
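As a side note on the "concatenate array elements into one string" part: in awk that is just string concatenation in a loop. A minimal, self-contained sketch (not the approach the answers below take; they avoid the array entirely):
awk 'BEGIN {
    n = split("CCACGG", b, "")    # split the string into single characters (gawk)
    s = ""
    for (i = 2; i <= n; i++)      # skip the first character, as in your example
        s = s b[i]                # append each array element to one string
    print s                       # prints: CACGG
}'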
You may try this awk:
awk -v OFS="\t" 'function trim(s) { return (length(s) == 1 ? "-" : substr(s, 2)); } {$4 = trim($4); $5 = trim($5)} 1' file
1 877803 838425 C -
1 878077 966631 - CACGG
More readable form:
awk -v OFS="\t" 'function trim(s) {
return (length(s) == 1 ? "-" : substr(s, 2))
}
{
$4 = trim($4)
$5 = trim($5)
} 1' file
You can use awk's substr function to process the 4th and 5th space-delimited fields:
awk '{ substr($4,2)==""?$4="-":$4=substr($4,2);substr($5,2)==""?$5="-":$5=substr($5,2)}1' file
If the string from position 2 onwards in field 4 is empty (""), set field 4 to "-"; otherwise, set field 4 to the substring from position 2 to the end of the field. Do the same with field 5. Print all lines, modified or not, with the shorthand 1.
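The same logic spelled out long-hand, purely as a readability sketch of the one-liner above (same behaviour):
awk '{
    if (substr($4, 2) == "") $4 = "-"; else $4 = substr($4, 2)   # drop the first char of field 4, or use "-" if nothing is left
    if (substr($5, 2) == "") $5 = "-"; else $5 = substr($5, 2)   # same treatment for field 5
} 1' file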
I need to get the results of this formula (a column of numbers)
{x = ($1-T1)/Fi; print (x-int(x))}
from the input files. file1:
4 4
8 4
7 78
45 2
file2:
0.2
3
2
1
From these files there should be 4 outputs.
$1 is the first column of file1; T1 is the first line of the first column of file1 (the number 4), and it is always that number. Fi, where i = 1, 2, 3, 4, are the numbers from the second file. So I need a cycle over i from 1 to 4: compute the term once with F1 = 0.2, produce the second output with F2 = 3, the third output with F3 = 2, and the last output with F4 = 1. For example, with F2 = 3 the second row of file1 gives x = (8 - 4)/3 ≈ 1.333, so 0.333 is printed. How do I express T1 and Fi this way, and how do I do the cycle?
awk 'FNR == NR { F[++n] = $1; next } FNR == 1 { T1 = $1 } { for (i = 1; i <= n; ++i) { x = ($1 - T1)/F[i]; print x - int(x) >"output" FNR} }' file2 file1
This gives more than 4 outputs. What is wrong please?
FNR == 1 { T1 = $1 } is being run twice: when file2 starts being read, T1 is set to 0.2.
>"output" FNR is also problematic; you should enclose the output-name expression in parentheses.
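That is, keeping everything else in the command the same, the redirection target is written as one parenthesised expression. Just a sketch of that single change (the rework below is what I'd actually use):
awk 'FNR == NR { F[++n] = $1; next }
     FNR == 1  { T1 = $1 }
     { for (i = 1; i <= n; ++i) { x = ($1 - T1)/F[i]; print x - int(x) > ("output" FNR) } }
' file2 file1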
Here's how I'd do it:
awk '
NR==1 {t1=$1}
NR==FNR {f[NR]=$1; next}
{
fn="output"FNR
for(i in f) {
x=(f[i]-t1)/$1
print x-int(x) >fn
}
close(fn)
}
' file1 file2
I'm a very recent command-line user, so I need some help splitting a text file by columns using awk. The difficulty for me is that I want the ith filename to be the text from the 1st row of the ith column.
This is what I had in mind:
awk '{for(i = 2; i <= NF; i++){name= ??FNR == 1 $i?? ;print $1, $i > name}}' myfile.txt
But I don't know how to set the name variable...
Input: myfile.txt
'ID' 'sample_1' 'sample_2' ...
'id_1' 1 2 ...
'id_2' 2 3 ...
Expected output:
sample_1.txt:
'ID' 'sample_1'
'id_1' 1
'id_2' 2
sample_2.txt:
'ID' 'sample_2'
'id_1' 2
'id_2' 3
Thanks
You should keep the column headers in an array:
awk 'NR==1 {
for (i=2; i<=NF; ++i) {
fnames[i] = gensub(/\x27/, "", "g", $i)
print $1, $i > fnames[i] ".txt"
}
next
}
{
for (i=2; i<=NF; ++i)
print $1, "\x27" $i "\x27" > fnames[i] ".txt"
}' myfile.txt
\x27 is a single quote in hex-escaped form.
gensub(/\x27/, "", "g", $i) removes single quotes from column headers to name output files as you wanted.
You can try this awk:
awk -F'\t' ' # tab as field separator
{
for ( i = 2 ; i <= NF ; i++ ) { # for each record loop from field 2 to last field
if ( NR == 1 ) { # if first record
a[i] = $i # keep each field in array a
gsub ( /^'\''|'\''$/ , "" , a[i] ) # remove quote at start and end in array a
}
print $1 FS $i > a[i]".txt" # print needed field in corresponding file
}
}' myfile.txt
The data is something like:
"1||2""3""2||3""5""4||3""6""43""4||4||3""4||3", 43 ,"4||3""43""3||4||4||3"
I've tried this myself:
BEGIN {
FPAT = "(\"[^\"]+\")|([ ])"
}
{
print "NF = ", NF
for (i = 1; i <= NF; i++) {
printf("$%d = <%s>\n", i, $i)
}
}
but the problem is that it's giving me output like:
$ gawk -f prog4.awk data1.txt
NF = 18
$1 = <"1||2">
$2 = <"3">
$3 = <"2||3">
$4 = <"5">
$5 = <"4||3">
$6 = <"6">
$7 = <"43">
$8 = <"4||4||3">
$9 = <"4||3">
$10 = <,>
$11 = < >
$12 = <4>
$13 = <3>
$14 = < >
$15 = <,>
$16 = <"4||3">
$17 = <"43">
$18 = <"3||4||4||3">
As you can see, for $10 through $15 every single character is taken as its own field. Help appreciated.
Let's try approaching this a different way - if the following is not what you are looking for, please tell us in what way(s) it differs from your desired output and why:
$ cat tst.awk
BEGIN { FPAT="\"[^\"]+\"" }
{
for (i=1; i<=NF; i++) {
print i, "<" $i ">"
}
}
$
$ gawk -f tst.awk file
1 <"1||2">
2 <"3">
3 <"2||3">
4 <"5">
5 <"4||3">
6 <"6">
7 <"43">
8 <"4||4||3">
9 <"4||3">
10 <"4||3">
11 <"43">
12 <"3||4||4||3">
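If the unquoted 43 between the two commas should also come out as its own field (an assumption on my part, since you haven't shown the output you actually want), one possibility is to let FPAT also match runs of characters that are not commas, quotes or blanks:
BEGIN { FPAT = "(\"[^\"]+\")|([^,\" ]+)" }   # quoted chunks, or bare tokens like 43
{
    for (i = 1; i <= NF; i++) {
        print i, "<" $i ">"
    }
}
With that, the bare 43 shows up as one extra field between the two <"4||3"> fields, while the commas and blanks are still treated as separators.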
awk '{for (i = 1; i <= NF; i++) {gsub(/[^[:alnum:]]/, " "); print tolower($i)": "NR | "sort -V | uniq";}}' input.txt
With the above code, I get output like this:
line1: 2
line1: 3
line1: 5
line2: 1
line2: 2
line3: 10
I want it like below:
line1: 2, 3, 5
line2: 1, 2
line3: 10
How to achieve it?
Use gawk's array features; here's one way to do it:
awk '{for (i = 1; i <= NF; i++) {
gsub(/[^[:alnum:]]/, " ");
arr[tolower($i)] = arr[tolower($i)]NR", "}
}
END {
for (x in arr) {
print x": "substr(arr[x], 1, length(arr[x])-2);
}}' input.txt | sort
Note that this includes duplicate line numbers if a word appears multiple times on the same line.
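If you'd rather list each line number only once per word, a small tweak (my sketch, keeping the structure of the code above) is to remember which word/line pairs have already been recorded:
awk '{for (i = 1; i <= NF; i++) {
gsub(/[^[:alnum:]]/, " ");
w = tolower($i);
if (!seen[w, NR]++)                 # skip word/line pairs already recorded
    arr[w] = arr[w] NR ", "}
}
END {
for (x in arr) {
print x": "substr(arr[x], 1, length(arr[x])-2);
}}' input.txt | sort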
Using perl:
#!/usr/bin/perl
while(<>){
if( /(\w+):\s*(\d+)/){ # extract the parts
$arr{lc($1)}{$2} ++ # count them
};
}
for my $k (sort keys %arr){ # print sorted alpha
print "$k: ";
$lines=$arr{$k};
print join(", ", (sort {$a<=>$b} keys %$lines)), "\n"; # print sorted numerically
}
This solution removes duplicate numbers and sorts them. Is this what you needed?