I have a data set (file.txt):
X Y
1 a
2 b
3 c
10 d
11 e
12 f
15 g
20 h
25 i
30 j
35 k
40 l
41 m
42 n
43 o
46 p
I want to add two columns, Up10 and Down10:
Up10: the count of rows whose X lies between (X-10) and (X).
Down10: the count of rows whose X lies between (X) and (X+10).
For example:
X Y Up10 Down10
35 k 3 5
For Up10: 35-10 = 25, so the rows X=35, X=30, X=25 are in range; total = 3 rows.
For Down10: 35+10 = 45, so the rows X=35, X=40, X=41, X=42, X=43 are in range; total = 5 rows.
I have tried the following, but I can't produce the 3rd and 4th columns:
awk 'BEGIN{ FS=OFS="\t" }
NR==FNR{
a[$1]+=$3
next
}
{ $(NF+10)=a[$3] }
{ $(NF-10)=a[$4] }
1
' file.txt file.txt > file-2.txt
Desired Output:
X Y Up10 Down10
1 a 1 5
2 b 2 5
3 c 3 4
10 d 4 5
11 e 5 4
12 f 5 3
15 g 4 3
20 h 5 3
25 i 3 3
30 j 3 3
35 k 3 5
40 l 3 5
41 m 3 4
42 n 4 3
43 o 5 2
46 p 5 1
This is Pierre François' solution: thanks again, @Pierre François!
awk '
BEGIN{OFS="\t"; print "X\tY\tUp10\tDown10"}
(NR == FNR) && (FNR > 1){a[$1] = $1 + 0}
(NR > FNR) && (FNR > 1){
up = 0; upl = $1 - 10
down = 0; downl = $1 + 10
for (i in a) { i += 0 # tricky: convert i to integer
if ((i >= upl) && (i <= $1)) {up++}
if ((i >= $1) && (i <= downl)) {down++}
}
print $1, $2, up, down;
}
' file.txt file.txt > file-2.txt
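Since file.txt is already sorted by X, the `for (i in a)` scan (which rechecks every stored key for every row, O(n²) overall) can be replaced by a sliding window over the stored rows. A minimal single-pass sketch, with the sample input inlined so it runs standalone:

```shell
# recreate the sample file.txt from the question
printf '%s\n' 'X Y' '1 a' '2 b' '3 c' '10 d' '11 e' '12 f' '15 g' \
  '20 h' '25 i' '30 j' '35 k' '40 l' '41 m' '42 n' '43 o' '46 p' > file.txt

awk '
BEGIN { OFS = "\t"; print "X", "Y", "Up10", "Down10" }
FNR > 1 { x[++n] = $1; y[n] = $2 }                    # store rows (input sorted by X)
END {
    lo = hi = 1
    for (i = 1; i <= n; i++) {
        while (x[lo] < x[i] - 10) lo++                # drop rows below X-10
        while (hi < n && x[hi+1] <= x[i] + 10) hi++   # extend window up to X+10
        print x[i], y[i], i - lo + 1, hi - i + 1      # both counts include row i
    }
}' file.txt > file-2.txt
```

Both pointers only ever move forward, so the whole table is produced in linear time; this only works because the X column is sorted.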
I have input.txt like so:
237 #
0 2 3 4 0. ABC
ABC
DEF
# 237
0 1 4 7 2 0.
0 3 8 9 1 0. GHI
XYZ
(a) If a row contains the symbol #, then I want a blank line at that point in the output.
(b) If a row starts with 0 and contains 0. then the entries up to (but not including) the terminating 0. should be displayed.
The following script accomplishes (b):
awk '{
for (i=1; i<NF; i++)
if($i == "0")
{arr[NR] = $i}
else
if ($i == "0.")
{break}
else
{arr[NR]=arr[NR]" "$(i)}}
($1 == "0") {print arr[NR]}
' input.txt > output.txt
so that the output is:
0 2 3 4
0 1 4 7 2
0 3 8 9 1
How can (a) be accomplished so that the output is:
// <----Starting newline
0 2 3 4
0 1 4 7 2
0 3 8 9 1
Try adding if ($0 ~ /#/) {print ""},
so:
awk '{
for (i=1; i<NF; i++)
if($i == "0")
{arr[NR] = $i}
else
if ($i == "0.")
{break}
else
      {arr[NR]=arr[NR]" "$(i)}
  if ($0 ~ /#/) {print ""}
}
($1 == "0") {print arr[NR]}
' input.txt > output.txt
Is this what you're trying to do?
$ awk '/#/{print ""} /^0/ && sub(/ 0\..*/,"")' file
0 2 3 4
0 1 4 7 2
0 3 8 9 1
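The compactness of that one-liner comes from awk's defaults: sub() returns the number of substitutions made, so successfully stripping the trailing " 0. ..." also serves as the true condition that triggers the default print action. A self-contained demo with the sample input inlined:

```shell
printf '%s\n' '237 #' '0 2 3 4 0. ABC' 'ABC' 'DEF' '# 237' \
  '0 1 4 7 2 0.' '0 3 8 9 1 0. GHI' 'XYZ' > input.txt

# /#/          -> emit a blank line for every row containing #
# /^0/ && sub  -> on rows starting with 0, delete " 0." and everything after
#                 it; sub() returning 1 makes awk print the modified record
awk '/#/{print ""} /^0/ && sub(/ 0\..*/,"")' input.txt
```

Note this prints one blank line per # row (the sample input contains two), whereas the desired output above shows only a single leading blank line.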
Assuming I have the following text file:
a b c d 1 2 3
e f g h 1 2 3
i j k l 1 2 3
m n o p 1 2 3
How do I replace '1 2 3' with '4 5 6' in the line that contains the letter (e), and move that line to after the line that contains the letter (k)?
N.B. The line that contains the letter (k) may appear anywhere in the file; the lines are not assumed to be in any particular order.
My approach is
Remove the line I want to replace
Find the lines before the line I want to move it after
Find the lines after the line I want to move it after
append the output to a file
grep -v 'e' $original > $file
grep -B999 'k' $file > $output
grep 'e' $original | sed 's/1 2 3/4 5 6/' >> $output
grep -A999 'k' $file | tail -n+2 >> $output
rm $file
mv $output $original
but there are several issues with this solution:
it uses a lot of grep commands that seem unnecessary
the -A999 and -B999 arguments assume the file contains no more than 999 lines; it would be better to have a way to get the lines before and after the matched line without that limit
I am looking for a more efficient way to achieve that
Using sed
$ sed '/e/{s/1 2 3/4 5 6/;h;d};/k/{G}' input_file
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
Here is a GNU awk solution:
awk '
/\<e\>/{
s=$0
sub("1 2 3", "4 5 6", s)
next
}
/\<k\>/ && s {
printf("%s\n%s\n",$0,s)
next
} 1
' file
Or POSIX awk:
awk '
function has(x) {
for(i=1; i<=NF; i++) if ($i==x) return 1
return 0
}
has("e") {
s=$0
sub("1 2 3", "4 5 6", s)
next
}
has("k") && s {
printf("%s\n%s\n",$0,s)
next
} 1
' file
Either prints:
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
This works regardless of the order of e and k in the file:
awk '
function has(x) {
for(i=1; i<=NF; i++) if ($i==x) return 1
return 0
}
has("e") {
s=$0
sub("1 2 3", "4 5 6", s)
next
}
FNR<NR && has("k") && s {
printf("%s\n%s\n",$0,s)
s=""
next
}
FNR<NR
' file file
This awk should work for you:
awk '
/(^| )e( |$)/ {
sub(/1 2 3/, "4 5 6")
p = $0
next
}
1
/(^| )k( |$)/ {
print p
p = ""
}' file
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
This might work for you (GNU sed):
sed -n '/e/{s/1 2 3/4 5 6/;s#.*#/e/d;/k/s/.*/\&\\n&/#p};' file | sed -f - file
Design a sed script by passing the file twice and applying the sed instructions from the first pass to the second.
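To see what the first pass generates, run it alone: it emits a two-command sed script built from the edited e line (input inlined here for the demo):

```shell
printf '%s\n' 'a b c d 1 2 3' 'e f g h 1 2 3' 'i j k l 1 2 3' 'm n o p 1 2 3' > file

# first pass only: print the generated sed script instead of piping it on
sed -n '/e/{s/1 2 3/4 5 6/;s#.*#/e/d;/k/s/.*/\&\\n&/#p};' file
```

This prints `/e/d;/k/s/.*/&\ne f g h 4 5 6/`, i.e. "delete the original e line, then append the edited copy after the k line", which the second sed invocation applies to the file.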
Another solution is to use ed:
cat <<\! | ed file
/e/s/1 2 3/4 5 6/
/e/m/k/
wq
!
Or if you prefer:
<<<$'/e/s/1 2 3/4 5 6/\n.m/k/\nwq' ed -s file
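In ed, m is the move command, so /e/m/k/ moves the line matched by /e/ to after the line matched by /k/; the second form can use .m/k/ because the substitution leaves the e line as the current line. A quick self-contained demo (input inlined, commands piped instead of the here-doc):

```shell
printf '%s\n' 'a b c d 1 2 3' 'e f g h 1 2 3' 'i j k l 1 2 3' 'm n o p 1 2 3' > file

# s/// edits the e line in place; .m/k/ moves it to after the k line
printf '%s\n' '/e/s/1 2 3/4 5 6/' '.m/k/' wq | ed -s file
cat file
```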
I have a column that looks like this:
A 1
B 3
C 5
D 4
E 7
F 1
G 1
H 3
For every field in column #2, I want to calculate the range (max-min) over a radius of 3 fields up and down.
A range(1 3 5 4)
B range(1 3 5 4 7)
C range(1 3 5 4 7 1)
D range(1 3 5 4 7 1 1)
E range( 3 5 4 7 1 1 3)
F range( 5 4 7 1 1 3)
G range( 4 7 1 1 3)
H range( 7 1 1 3)
How can I do this in awk?
I could do the same in perl using:
my $set_size = @values;
for ( my $i = 0 ; $i < $set_size ; $i++ ) {
my $min = $i - 4;
if ( $min < 0 ) { $min = 0; }
my $max = $i + 4;
if ( $max > ( $set_size - 1 ) ) { $max = $set_size - 1; }
my $min_val = $values[$min];
my $max_val = $values[$min];
for ( my $j = $min ; $j <= $max ; $j++ ) {
if ( $values[$j] <= $min_val ) { $min_val = $values[$j]; }
if ( $values[$j] >= $max_val ) { $max_val = $values[$j]; }
}
my $range = $max_val - $min_val;
printf "$points[$i] %.15f\n", $range;
}
idk exactly what "I want to calculate the range (max-min) of 3 field radius up and down" means, but to get the output you posted from the input you posted would be:
$ cat tst.awk
{
    keys[NR] = $1
    values[NR] = $2
}
END {
    range = 3
    for (i=1; i<=NR; i++) {
        min = ( (i - range) >= 1 ? i - range : 1 )
        max = ( (i + range) <= NR ? i + range : NR )
        printf "%s range(", keys[i]
        for (j=min; j<=max; j++) {
            printf "%s%s", values[j], (j<max ? " " : ")\n")
        }
    }
}
$
$ awk -f tst.awk file
A range(1 3 5 4)
B range(1 3 5 4 7)
C range(1 3 5 4 7 1)
D range(1 3 5 4 7 1 1)
E range(3 5 4 7 1 1 3)
F range(5 4 7 1 1 3)
G range(4 7 1 1 3)
H range(7 1 1 3)
Your sample perl doesn't print your sample output; it seems to do something different... so here's how I'd do it in perl:
#!/usr/bin/perl
use warnings;
use strict;
use feature qw/say/;
use List::Util qw/min max/;

my (@col1, @col2);
while (<>) {
    chomp;
    my ($v1, $v2) = split;
    push @col1, $v1;
    push @col2, $v2;
}

my @prefix;
for my $i (0 .. $#col1) {
    my @range = @col2[max($i - 3, 0) .. min($i + 3, $#col2)];
    push @prefix, ' ' if $i > 3;
    unshift @range, @prefix;
    say "$col1[$i] range(@range)";
}
running it:
$ perl range.pl input.txt
A range(1 3 5 4)
B range(1 3 5 4 7)
C range(1 3 5 4 7 1)
D range(1 3 5 4 7 1 1)
E range( 3 5 4 7 1 1 3)
F range( 5 4 7 1 1 3)
G range( 4 7 1 1 3)
H range( 7 1 1 3)
The formatting will break if any of the numbers are greater than 9, though.
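If the goal really is the numeric range (max-min) of each window, as the question's wording suggests, the same bookkeeping can track min and max directly in awk. A sketch under that reading (input inlined so it runs standalone):

```shell
printf '%s\n' 'A 1' 'B 3' 'C 5' 'D 4' 'E 7' 'F 1' 'G 1' 'H 3' > file

awk '
{ keys[NR] = $1; values[NR] = $2 }
END {
    r = 3                                  # window radius
    for (i = 1; i <= NR; i++) {
        lo = (i - r >= 1 ? i - r : 1)      # clamp window to the data
        hi = (i + r <= NR ? i + r : NR)
        min = max = values[lo]
        for (j = lo + 1; j <= hi; j++) {
            if (values[j] < min) min = values[j]
            if (values[j] > max) max = values[j]
        }
        print keys[i], max - min           # max-min over the window
    }
}' file
```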
Since you tagged tcl:
#!/usr/bin/env tclsh
lassign $argv file size
set fh [open $file r]
# assume exactly 2 space-separated words per line
set data [dict create {*}[split [read -nonewline $fh]]]
close $fh
set len [dict size $data]
set values [dict values $data]
set i 0
dict for {key _} $data {
set first [expr {max($i - $size, 0)}]
set last [expr {min($i + $size, $len)}]
puts [format "%s range(%s%s)" \
$key \
[string repeat " " $first] \
[lrange $values $first $last] \
]
incr i
}
outputs
A range(1 3 5 4)
B range(1 3 5 4 7)
C range(1 3 5 4 7 1)
D range(1 3 5 4 7 1 1)
E range( 3 5 4 7 1 1 3)
F range( 5 4 7 1 1 3)
G range( 4 7 1 1 3)
H range( 7 1 1 3)
TXR at the shell prompt:
bash$ txr -c '#(collect)
#label #num
#(end)
#(bind rng #[window-map 3 nil (opip list (remq nil) (mapcar toint)) num])
#(output)
# (repeat)
#label range(#rng) -> #(find-min rng)..#(find-max rng)
# (end)
#(end)'
A 1
B 3
C 5
D 4
E 7
F 1
G 1
H 3
[Ctrl-D][Enter]
A range(1 3 5 4) -> 1..5
B range(1 3 5 4 7) -> 1..7
C range(1 3 5 4 7 1) -> 1..7
D range(1 3 5 4 7 1 1) -> 1..7
E range(3 5 4 7 1 1 3) -> 1..7
F range(5 4 7 1 1 3) -> 1..7
G range(4 7 1 1 3) -> 1..7
H range(7 1 1 3) -> 1..7
Can you explain to me why this simple one-liner does not work? Thanks for your time.
awk 'BEGIN{i=1}{if($2 == i){print $0} else{print "0",i} i=i+1}' check
input text file with name "check":
a 1
b 2
c 3
e 5
f 6
g 7
desired output:
a 1
b 2
c 3
0 4
e 5
f 6
g 7
output received:
a 1
b 2
c 3
0 4
0 5
0 6
awk 'BEGIN{i=1}{ if($2 == i){print $0; } else{print "0",i++; print $0 } i++ }' check
Increment i one more time in the else branch (you are inserting a new line), and print the current line in the else, too.
Note this works only if there is exactly one line missing between the present lines; otherwise you need a loop to print all the missing lines.
Or simplified:
awk 'BEGIN{i=1}{ if($2 != i){print "0",i++; } print $0; i++ }' check
Yours is broken because:
you read the next line ("e 5"),
$2 is not equal to your counter,
you print the placeholder line and increment your counter (to 5),
you do not print the current line
you read the next line ("f 6")
goto 2
A while loop is warranted here -- that will also handle the case when you have gaps greater than a single number.
awk '
NR == 1 {prev = $2}
{
while ($2 > prev+1)
print "0", ++prev
print
prev = $2
}
' check
or, if you like impenetrable one-liners:
awk 'NR==1{p=$2}{while($2>p+1)print "0",++p;p=$2}1' check
All you need is:
awk '{while (++i<$2) print 0, i}1' file
Look:
$ cat file
a 1
b 2
c 3
e 5
f 6
g 7
k 11
n 14
$ awk '{while (++i<$2) print 0, i}1' file
a 1
b 2
c 3
0 4
e 5
f 6
g 7
0 8
0 9
0 10
k 11
0 12
0 13
n 14
I have data in the following format:
ID Date X1 X2 X3
1 01/01/00 1 2 3
1 01/02/00 7 8 5
2 01/03/00 9 7 1
2 01/04/00 1 4 5
I would like to group measurements into new rows according to ID, so I end up with:
ID Date X1 X2 X3 Date X1_2 X2_2 X3_2
1 01/01/00 1 2 3 01/02/00 7 8 5
2 01/03/00 9 7 1 01/04/00 1 4 5
etc.
I have as many as 20 observations for a given ID.
So far I have tried the technique given by http://gadgetsytecnologia.com/da622c17d34e6f13e/awk-transpose-childids-column-into-row.html
The code I have tried so far is:
awk -F, OFS = '\t' 'NR >1 {a[$1] = a[$1]; a[$2] = a[$2]; a[$3] = a[$3];a[$4] = a[$4]; a[$5] = a[$5] OFS $5} END {print "ID,Date,X1,X2,X3,Date_2,X1_2, X2_2 X3_2'\t' for (ID in a) print a[$1:$5] }' file.txt
The file is tab-delimited. I don't know how to manipulate the data, or how to account for the fact that there can be more than two observations per person.
Just keep track of what was the previous first field. If it changes, print the stored line:
awk 'NR==1 {print; next} # print header
prev && $1!=prev {print prev, line; line=""} # print on different $1
{prev=$1; $1=""; line=line $0} # store data and remove $1
END {print prev, line}' file # print trailing line
If you have tab-separated fields, just add -F"\t".
Test
$ awk 'NR==1 {print; next} prev && $1!=prev {print prev, line; line=""} {prev=$1; $1=""; line=line $0} END {print prev, line}' a
ID Date X1 X2 X3
1 01/01/00 1 2 3 01/02/00 7 8 5
2 01/03/00 9 7 1 01/04/00 1 4 5
You can try this (gnu-awk solution):
gawk '
NR == 1 {
N = NF;
MAX = NF-1;
for(i=1; i<=NF; i++){ #store columns names
names[i]=$i;
}
next;
}
{
for(i=2; i<=N; i++){
a[$1][length(a[$1])+1] = $i; #store records for each id
}
if(length(a[$1])>MAX){
MAX = length(a[$1]);
}
}
END{
firstline = names[1];
for(i=1; i<=MAX; i++){ #print first line
column = int((i-1)%(N-1))+2
count = int((i-1)/(N-1));
firstline=firstline OFS names[column];
if(count>0){
firstline=firstline"_"count
}
}
print firstline
for(id in a){ #print each record in store
line = id;
for(i=1; i<=length(a[id]); i++){
line=line OFS a[id][i];
}
print line;
}
}
' input
input
ID Date X1 X2 X3
1 01/01/00 1 2 3
1 01/02/00 7 8 5
2 01/03/00 9 7 1
2 01/04/00 1 4 5
1 01/03/00 72 28 25
you get
ID Date X1 X2 X3 Date_1 X1_1 X2_1 X3_1 Date_2 X1_2 X2_2 X3_2
1 01/01/00 1 2 3 01/02/00 7 8 5 01/03/00 72 28 25
2 01/03/00 9 7 1 01/04/00 1 4 5