How can I replace sequences of space-delimited numbers when those sequences span multiple lines and the input is too big to fit in RAM?
A sample input would be:
edit: I re-worked the sample input and input parameters to introduce border cases (excluding ones that have to do with the length of the matched sequence or replacement priorities)
3 12 3 4
0 6 7 10
8 9 12 3
4 6 7 8
10 6 6 7
9 199 10 11
11
note: the number of fields per line is homogeneous but not known in advance; the last line might contain fewer fields
From that input I would like to:
replace 3 4 with &
replace 6 7 8 with 9 9
replace 6 7 9 with 8 8
replace 7 10 with 11 12
replace 0 with nothing
replace 10 with 13 10
replace 8 9 12 3 5 with #
The expected output would have one number or replacement per line:
3
12
&
6
11 12
8
9
12
&
9 9
13 10
6
8 8
199
13 10
11
11
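(Note two of the border cases: the second & in the output comes from the 3 that ends the third input line plus the 4 that starts the fourth line, so the matching has to treat the whole file as one continuous token stream; and the 8 9 12 3 that opens the third line is a prefix of the 8 9 12 3 5 mapping that never completes, so after that partial match fails those tokens must still be re-examined, producing 8, 9, 12 and then the 3 4 match.)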
I'm trying to do the task with awk but I'm having a hard time implementing a dynamic state machine with a pseudo B-Tree:
tr -s '[:space:]' '\n' < input.txt |
awk '
BEGIN {
for (i = 2; i < ARGC; i += 2) {
n = split(ARGV[i], arr)
k = ""
for (j = 1; j <= n; j++) {
k = j SUBSEP k SUBSEP arr[j]
Tree[k]
}
Tree[k] = "$" ARGV[i+1] #=> now can test "if (Tree[k])"
delete ARGV[i]
delete ARGV[i+1]
}
}
{
Key = (int(Key) + 1) SUBSEP Key SUBSEP $1
if ( Key in Tree ) {
if (Tree[Key]) {
print substr(Tree[Key],2)
Buffer = ""
Key = ""
}
else
Buffer = Buffer $1 "\n"
} else {
print Buffer $1
Buffer = ""
Key = ""
}
}
END { if (Buffer != "") printf ("%s", Buffer) }
' - \
'3 4' '&' \
'6 7 8' '9 9' \
'6 7 9' '8 8' \
'7 10' '11 12' \
'0' '' \
'10' '13 10' \
'8 9 12 3 5' '#'
edit: I realised that the code doesn't backtrack after failing to find a complete match in the B-tree, so it's wrong...
How I'm planning to tackle the problem
I'm emulating a B-tree with an array and keys in the following format:
from the middle to the left of the key are the consecutive depths
from the middle to the right of the key are the consecutive values
When a key exists in Tree:
if it doesn't have an associated value then it's a node
if there's a value then it's a leaf
So, for the current input parameters, the content of the Tree array will be:
# from param: "3 4" => "&"
Tree[ 1,"",3 ]
Tree[2,1,"",3,4] = "$&"
# from param: "6 7 8" => "9 9"
Tree[ 1,"",6 ]
Tree[ 2,1,"",6,7 ]
Tree[3,2,1,"",6,7,8] = "$9 9"
# from param: "6 7 9" => "8 8"
Tree[ 1,"",6 ]
Tree[ 2,1,"",6,7 ]
Tree[3,2,1,"",6,7,9] = "$8 8"
# from param: "7 10" => "11 12"
Tree[ 1,"",7 ]
Tree[2,1,"",7,10] = "$11 12"
# from param: "0" => ""
Tree[1,"",0] = "$"
# from param: "10" => "13 10"
Tree[1,"",10] = "$13 10"
# from param: "8 9 12 3 5" => "#"
Tree[ 1,"",8 ]
Tree[ 2,1,"",8,9 ]
Tree[ 3,2,1,"",8,9,12 ]
Tree[ 4,3,2,1,"",8,9,12,3 ]
Tree[5,4,3,2,1,"",8,9,12,3,5] = "$#"
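To make the missing backtracking concrete, here is a minimal, self-contained sketch of the behaviour I'm after (one hard-coded mapping, plain arrays, no B-tree emulation; the names tok/pat are just for the demo):
# illustration only -- not a fix for the script above
BEGIN {
    n = split("6 6 7 8 5", tok)      # token stream; the first 6 starts a match that fails
    plen = split("6 7 8", pat)       # single demo mapping: "6 7 8" -> "9 9"
    i = 1
    while (i <= n) {
        ok = 1
        for (j = 0; j < plen && ok; j++)
            if (tok[i + j] != pat[j + 1]) ok = 0
        if (ok) { print "9 9"; i += plen }   # complete match: emit the replacement
        else    { print tok[i]; i++ }        # backtrack: emit one token, retry from the next one
    }
}
Running it prints 6, then 9 9, then 5: the 6 7 8 starting at the second token is still found even though the first 6 began a match that later failed.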
FWIW I'd approach this by figuring out the max number of records you might need to search in based on the mappings you want, keeping a rolling buffer of that many records, and then doing the comparison part on each buffer, e.g.:
$ cat tst.awk
BEGIN {
RS = "[[:space:]]+"
map("3,4" , "&")
map("6,7,8" , "9")
map("9" , "")
map("0" , "\\000")
map("13,10" , "10")
}
{ buf[((NR-1) % maxRecs) + 1] = $0 }
NR >= maxRecs { prt() }
END { prt() }
function prt( nr,sep,str) {
for ( nr=NR-maxRecs+1; nr<=NR; nr++ ) {
str = str sep buf[((nr-1) % maxRecs) + 1]
sep = ORS
}
print ">>>>" ORS str ORS "<<<<"
# Replace the above with something that loops through the
# strings you want replaced, e.g.
#
# for ( mapNr=1; mapNr<=numMaps; mapNr++ ) {
# old = olds[mapNr]
# if ( str ~ old ) { # add something to avoid partial matches
# new = news[mapNr]
# replace old with new in the output
# }
# }
}
function map(old,new, numRecs) {
++numMaps
numRecs = gsub(/,/,ORS,old) + 1
maxRecs = ( numRecs > maxRecs ? numRecs : maxRecs )
olds[numMaps] = old
news[numMaps] = new
}
$ awk -f tst.awk file
>>>>
112
3
4
<<<<
>>>>
3
4
6
<<<<
>>>>
4
6
7
<<<<
>>>>
6
7
8
<<<<
>>>>
7
8
9
<<<<
>>>>
8
9
12
<<<<
>>>>
9
12
0
<<<<
>>>>
12
0
3
<<<<
>>>>
0
3
4
<<<<
>>>>
3
4
15
<<<<
>>>>
4
15
255
<<<<
>>>>
15
255
13
<<<<
>>>>
255
13
10
<<<<
>>>>
13
10
6
<<<<
>>>>
10
6
7
<<<<
>>>>
6
7
8
<<<<
>>>>
7
8
199
<<<<
>>>>
8
199
9
<<<<
>>>>
199
9
0
<<<<
>>>>
9
0
13
<<<<
>>>>
9
0
13
<<<<
The above just prints the buffer-sized strings; the part still to be added is replacing the target strings with the new ones in a way that the next target can't match the already-replaced part, which is a common problem with, I expect, lots of solutions online, so it's left as an exercise.
You'll also need to tweak it so it doesn't revisit lines at the end of the input.
The above uses GNU awk for multi-char RS; if you don't have GNU awk, just pipe the input through tr -s '[:space:]' '\n' as shown in the question.
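As a rough sketch of just the "avoid partial matches" test (illustrative only; it relies on the same newline-delimited layout that map() and prt() build, and match-demo.awk is a hypothetical file name):
$ cat match-demo.awk
BEGIN {
    str = "6" ORS "7" ORS "8"    # three buffered records, newline-separated like str in prt()
    old = "6" ORS "7"            # a two-record target, as built by map()
    bad = "6" ORS "78"           # must NOT match: "7" is only part of the record "78"
    print ( index(str ORS, old ORS) == 1 ? "full match" : "no match" )
    print ( index(str ORS, bad ORS) == 1 ? "full match" : "no match" )
}
$ awk -f match-demo.awk
full match
no match
Appending ORS to both strings before the index() test means every compared record ends on a record boundary, so a target can only match whole records at the front of the buffer.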
UPDATE:
the previous answer (see edit revisions) was woefully slow (several minutes) when run against a ramped-up input (7K mappings in map.txt; 25M tokens in input.txt)
the new answer (below) is a complete rewrite and processes the same 7K-mappings/25M-tokens input in ~45 seconds
The main component of this design centers around a tree-like node structure used to manage the series of tokens (lines of input from map.txt):
tree [ParentNodeNbr] [token] [NodeType] = value
Where:
ParentNodeNbr == 0 for the root
token from map.txt
NodeType has one of two values 'node' or 'leaf'
for NodeType = 'node' the value stored in the array is a numeric node number (implemented as a counter that's incremented each time a new node is added to the tree); this node number becomes the ParentNodeNbr for the next token in the series
for NodeType = 'leaf' this marks the end of a series of tokens (one line of input from map.txt) and the value stored in the array is the line number (aka FNR) from map.txt; this line number (FNR) is used as an index into a couple of other arrays and to determine precedence when an input sequence (from input.txt) has multiple matches from map.txt
when processing a series of tokens from a map.txt line of input we start at ParentNodeNbr == 0 and look for a series of matching nodes, adding new nodes as needed
Setup: storing replacements in a comma-delimited file (map.txt), and adding one additional line to input.txt:
$ head map.txt input.txt
==> map.txt <==
2 3 4,X # "2 3 4" has precedence over ...
2 3,Y # "2 3"
3 4,&
6 7 8,9
9,
0,\000
13 10,10
==> input.txt <==
2 3 4 # keep an eye on "2 3" vs "2 3 4" precedence
112 3
4 6 7
8 9 12 0 3
4 15 255 13
10 6
7 8 199 9
0 13
NOTE: here's what tree[][][] looks like when populated from map.txt:
tree [Parent] [Token] [NodeType] = NodeVal
Parent Token NodeType NodeVal MapTo ** MapTo only applies to NodeType = leaf
====== ===== ======== ======= =====
0 0 leaf 6 "\000"
0 2 node 1
0 3 node 3
0 6 node 4
0 9 leaf 5 ""
0 13 node 6
1 3 node 2
1 3 leaf 2 "Y"
2 4 leaf 1 "X"
3 4 leaf 3 "&"
4 7 node 5
5 8 leaf 4 "9"
6 10 leaf 7 "10"
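For example, tracing the first three input tokens (2 3 4) through that tree: from parent 0, token 2 leads to a child node; there, token 3 is both a leaf (the "2 3" -> "Y" mapping, order 2 in map.txt) and a node, so the best order seen so far becomes 2 and the walk descends; one level down, token 4 is a leaf for the "2 3 4" -> "X" mapping with order 1, which is lower (higher precedence), so "X" is printed and all three tokens are consumed.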
One GNU awk approach (for multidimensional arrays):
awk '
function replace(op) {
while ( ((maxToken - minToken + 1) >= maxlen) || op == "flush" ) {
NodeNbr=root
minOrd=maxOrd
for (j=0 ; j<maxlen; j++) { # loop through tokens in buffer[]
token=buffer[ ((minToken + j - 1) % maxlen) + 1 ]
# if we find a matching "leaf" node then keep track of the ordering (ie, FNR from map.txt; lower order == higher precedence)
if ( token in tree [NodeNbr] && "leaf" in tree[NodeNbr][token] )
minOrd= ( tree[NodeNbr][token]["leaf"] < minOrd ) ? tree[NodeNbr][token]["leaf"] : minOrd
# if we find a matching "node" node then grab the next node to compare against the next token from buffer[]
if ( token in tree[NodeNbr] && "node" in tree[NodeNbr][token] ) {
NodeNbr=tree[NodeNbr][token]["node"]
continue
}
break # if we get here we have a token from buffer[] that does not match any of our replacement mappings so abort checking rest of buffer[]
}
if (minOrd < maxOrd) { # if we found at least one complete match (ie, hit a "leaf" node) then ...
print map[minOrd] # use the associated "ord"er to print the associated replacement string and ...
minToken=minToken + len[minOrd] # update the pointer into the buffer[] array
}
else { # otherwise we did not find a match so ...
print buffer[ ((minToken - 1) % maxlen) + 1 ] # print the first token from buffer[] and ...
minToken++ # update the pointer into the buffer[] array
}
if (minToken > maxToken)
break
}
}
BEGIN { root=maxNodeNbr=maxToken=0
minToken=1
maxOrd=9999999999
}
FNR==NR { split($0,a,",")
map[FNR]=a[2] # save replacement string for this input line from map.txt
n=split(a[1],b) # break our matching pattern into tokens
len[FNR]=n # make note of number of tokens in this line of input
maxlen=(n > maxlen) ? n : maxlen # keep track of longest series of tokens
NodeNbr=root # initiate our tree search
for (i=1 ; i<=n ; i++) { # loop through our list of tokens
token=b[i]
if (i==n) # if the last token for this line then create a "leaf" node and store the line number (aka "order")
tree[NodeNbr][token]["leaf"]=FNR
else
if ( tree[NodeNbr][token]["node"] ) # else if we already have a node at this point in the tree then grab its associated node number for the next level in the tree
NodeNbr=tree[NodeNbr][token]["node"]
else { # else create a new "node" node and populate with the next available node number
tree[NodeNbr][token]["node"]=++maxNodeNbr
NodeNbr=maxNodeNbr # use this as the next level in our tree traversal
}
}
maxrec=FNR # keep track of total number of replacement sets from map.txt (only used if we decide to print the contents of map[] to stdout)
next
}
FNR==1 {
# Uncomment following to display the contents of the map[] array:
# for (i=1;i<=maxrec;i++)
# print "map:" i ":" map[i] ":"
#
# Uncomment following to display the contents of the tree[][][] array:
# fmt="%6s%8s%10s%10s%10s\n"
# printf "tree [Parent] [Token] [NodeType]\n\n"
# printf fmt, "Parent", "Token", "NodeType", "NodeVal", "MapTo"
# printf fmt, "======", "=====", "========", "=======", "====="
#
# for (NodeNbr=root ; NodeNbr<=maxNodeNbr ; NodeNbr++)
# for (token in tree[NodeNbr])
# for (NodeType in tree[NodeNbr][token]) { # ??
# NodeVal=tree[NodeNbr][token][NodeType]
# printf fmt, NodeNbr, token, NodeType, NodeVal, (NodeType=="leaf") ? "\"" map[NodeVal] "\"" : ""
# }
}
{ for (i=1 ; i<=NF ; i++) { # loop through tokens in current line from input.txt
maxToken++
buffer[ ((maxToken - 1) % maxlen) + 1 ] = $i
if ( (maxToken - minToken + 1) >= maxlen ) # if we have a "full" buffer then ...
replace() # look for replacement match
}
}
END { replace("flush") } # flush the rest of buffer[]
' map.txt input.txt
This generates:
X # "2 3 4" has precendence over "2 3"
112
&
9
12
\000
&
15
255
10
9
199
\000
13
If we switch the first 2 lines of map.txt like so:
==> map.txt <==
2 3,Y # "2 3" has precedence over ...
2 3 4,X # "2 3 4"
We now generate:
Y # "2 3" has precendence over "2 3 4" thus ...
4 # leaving "4" by itself
112
&
9
12
\000
&
15
255
10
9
199
\000
13
I have a data set: (file.txt)
X Y
1 a
2 b
3 c
10 d
11 e
12 f
15 g
20 h
25 i
30 j
35 k
40 l
41 m
42 n
43 o
46 p
I want to add two columns, Up10 and Down10:
Up10: the count of rows with X values from (X) down to (X-10), inclusive.
Down10: the count of rows with X values from (X) up to (X+10), inclusive.
For example:
X Y Up10 Down10
35 k 3 5
For Up10: 35-10 = 25, and the rows X=35, X=30, X=25 fall in that range, so the total = 3 rows
For Down10: 35+10 = 45, and the rows X=35, X=40, X=41, X=42, X=43 fall in that range, so the total = 5 rows
Desired Output:
X Y Up10 Down10
1 a 1 5
2 b 2 5
3 c 3 4
10 d 4 5
11 e 5 4
12 f 5 3
15 g 4 3
20 h 5 3
25 i 3 3
30 j 3 3
35 k 3 5
40 l 3 5
41 m 3 4
42 n 4 3
43 o 5 2
46 p 5 1
This is Pierre François' solution, thanks again @Pierre François:
awk '
BEGIN{OFS="\t"; print "X\tY\tUp10\tDown10"}
(NR == FNR) && (FNR > 1){a[$1] = $1 + 0}
(NR > FNR) && (FNR > 1){
up = 0; upl = $1 - 10
down = 0; downl = $1 + 10
for (i in a) { i += 0 # tricky: convert i to integer
if ((i >= upl) && (i <= $1)) {up++}
if ((i >= $1) && (i <= downl)) {down++}
}
print $1, $2, up, down;
}
' file.txt file.txt > file-2.txt
But when I use this command on 13 GB of data, it takes too long.
I have also tried this approach on the 13 GB data:
awk 'BEGIN{ FS=OFS="\t" }
NR==FNR{a[NR]=$1;next} {x=y=FNR;while(--x in a&&$1-10<a[x]){} while(++y in a&&$1+10>a[y]){} print $0,FNR-x,y-FNR}
' file.txt file.txt > file-2.txt
When file-2.txt reaches 1.1 GB the command seems to freeze. I have waited several hours, but the command never finishes and I never get the final output file.
Note: I am working on Google Cloud, machine type e2-highmem-8 (8 vCPUs, 64 GB memory).
A single pass awk that keeps a sliding window of the last 10 records and uses that to count the ups and downs. For symmetry's sake there should be deletes in the END block too, but I guess a few extra array elements in memory aren't gonna make a difference:
$ awk '
BEGIN {
FS=OFS="\t"
}
NR==1 {
print $1,$2,"Up10","Down10"
}
NR>1 {
a[NR]=$1
b[NR]=$2
for(i=NR-9;i<=NR;i++) {
if(a[i]>=a[NR]-10&&i>=2)
up[NR]++
if(a[i]<=a[NR-9]+10&&i>=2)
down[NR-9]++
}
}
NR>10 {
print a[NR-9],b[NR-9],up[NR-9],down[NR-9]
delete a[NR-9]
delete b[NR-9]
delete up[NR-9]
delete down[NR-9]
}
END {
for(nr=NR+1;nr<=NR+9;nr++) {
for(i=nr-9;i<=nr;i++)
if(a[i]<=a[nr-9]+10&&i>=2&&i<=NR)
down[nr-9]++
print a[nr-9],b[nr-9],up[nr-9],down[nr-9]
}
}' file
Output:
X Y Up10 Down10
1 a 1 5
2 b 2 5
...
35 k 3 5
...
43 o 5 2
46 p 5 1
Another single pass approach with a sliding window
awk '
NR == 1 { next } # skip the header
NR == 2 { min = max = cur = 1; X[cur] = $1; Y[cur] = $2; next }
{ X[++max] = $1; Y[max] = $2
if (X[cur] >= $1 - 10) next
for (; X[cur] + 10 < X[max]; ++cur) {
for (; X[min] < X[cur] - 10; ++min) {
delete X[min]
delete Y[min]
}
print X[cur], Y[cur], cur - min + 1, max - cur
}
}
END {
for (; cur <= max; ++cur) {
for (; X[min] < X[cur] - 10; ++min);
for (i = max; i > cur && X[cur] + 10 < X[i]; --i);
print X[cur], Y[cur], cur - min + 1, i - cur + 1
}
}
' file
The script assumes the X column is ordered numerically.
Hey, I'm trying to find the distance between records in a text file using awk.
An example input is:
1 2 1 4 yes
2 3 2 2 no
1 1 1 5 yes
4 2 4 0 no
5 1 0 1 no
I want to find the distance between each of the numerical values. I'm doing this by subtracting the values and then squaring the result. I have tried the code below, but all the distances come out as 0. Any help would be appreciated.
BEGIN {recs = 0; fieldnum = 5;}
{
recs++;
for(i=1;i<=NF;i++) {data[recs,i] = $i;}
}
END {
for(r=1;r<=recs;r++) {
for(f=1;f<fieldnum;f++) {
##find distances
for(t=1;t<=recs;t++) {
distance[r,t]+=((data[r,f] - data[t,f])*(data[r,f] - data[t,f]));
}
}
}
for(r=1;r<=recs;r++) {
for(t=1;t<recs;t++) {
##print distances
printf("distance between %d and %d is %d \n",r,t,distance[r,t]);
}
}
}
No idea what you mean conceptually by the "distance between each of the numerical values", so I can't help you with your algorithm, but let's clean up the code to see what that looks like:
$ cat tst.awk
{
for(i=1;i<=NF;i++) {
data[NR,i] = $i
}
}
END {
for(r=1;r<=NR;r++) {
for(f=1;f<NF;f++) {
##find distances
for(t=1;t<=NR;t++) {
delta = data[r,f] - data[t,f]
distance[r,t]+=(delta * delta)
}
}
}
for(r=1;r<=NR;r++) {
for(t=1;t<NR;t++) {
##print distances
printf "distance between %d and %d is %d\n",r,t,distance[r,t]
}
}
}
$
$ awk -f tst.awk file
distance between 1 and 1 is 0
distance between 1 and 2 is 7
distance between 1 and 3 is 2
distance between 1 and 4 is 34
distance between 2 and 1 is 7
distance between 2 and 2 is 0
distance between 2 and 3 is 15
distance between 2 and 4 is 13
distance between 3 and 1 is 2
distance between 3 and 2 is 15
distance between 3 and 3 is 0
distance between 3 and 4 is 44
distance between 4 and 1 is 34
distance between 4 and 2 is 13
distance between 4 and 3 is 44
distance between 4 and 4 is 0
distance between 5 and 1 is 27
distance between 5 and 2 is 18
distance between 5 and 3 is 33
distance between 5 and 4 is 19
Seems to produce some non-zero output....
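As a sanity check, the 7 reported between records 1 and 2 is just the sum of squared differences over the first four fields: (1-2)^2 + (2-3)^2 + (1-2)^2 + (4-2)^2 = 1 + 1 + 1 + 4 = 7.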
I have a file like
Sever Name aad98722RHEL 20120630 075022

CPU
1 sec 10 sec 15 sec 1 min 1 hour
5 8 0 1 19

TX kbits/sec:

interface 10 sec 1 min 10 min 1 hour 1 day
--------- ------ ----- ------ ------ -----
eth0 32 33 39 40 33
eth1 6 186 321 199 18
eth2 0 0 0 0 0
mgt0 0 0 0 0 0

RX kbits/sec:

interface 10 sec 1 min 10 min 1 hour 1 day
--------- ------ ----- ------ ------ -----
eth0 19 19 25 26 23
eth1 9 26 40 28 10
eth2 0 0 0 0 0
mgt0 0 0 0 0 0

Total memory usage: 1412916 kB
Resident set size : 1256360 kB
Heap usage : 1368212 kB
Stack usage : 84 kB
Library size : 16316 kB
What I would like to produce is
aad98722RHE 20120630 075022 CPU 5 8 0 1 19
aad98722RHE 20120630 075022 TX kbits/sec: 32 33 39 40 33 6 186 321 199 18 0 0 0 0 0 0 0 0 0 0
aad98722RHE 20120630 075022 RX kbits/sec: 19 19 25 26 23 9 26 40 28 10 0 0 0 0 0 0 0 0 0 0
aad98722RHE 20120630 075022 Total memory usage: 1412916 kB Resident set size : 1256360 kB Heap usage : 1368212 kB Stack usage : 84 kB Library size : 16316 kB
Can this be done in Awk/Sed and how?
Perhaps it's not the best solution, but it works.
file: a.awk:
function print_cpu( server_name, cpu )
{
while ( $0 !~ cpu )
{
getline
}
getline
getline
printf "%s %s ", server_name, cpu
for ( i = 1; i < NF + 1; i++ )
{
printf "%s ", $i
}
printf "\n"
}
function print_rx_or_tx( server_name, rx_or_tx )
{
while ( $0 !~ rx_or_tx )
{
getline
}
getline
getline
getline
printf "%s %s ", server_name, rx_or_tx
while ( $0 != "" )
{
getline
for ( i = 2; i <= NF; i++ )
{
printf "%s ", $i
}
}
printf "\n"
}
function print_stuff( server_name )
{
while ( $0 == "" )
{
getline
}
printf "%s ", server_name
while ( $0 != "" )
{
printf "%s ", $0
if ( getline <= 0 )
{
break
}
}
printf "\n"
}
BEGIN { server = "Server Name"; cpu = "CPU"; tx = "TX kbits/sec:"; rx = "RX kbits/sec:" }
server { server_name = $3 " " $4 " " $5 }
! server
{
print_cpu( server_name, cpu )
print_rx_or_tx( server_name, tx )
print_rx_or_tx( server_name, rx )
print_stuff( server_name )
}
run: awk -f a.awk your_input_file
One way using perl.
Assuming infile holds the content of your question and script.pl contains the following:
use warnings;
use strict;
my ($header, $newlines, $trans, @nums);
## Read input in paragraph mode.
local $/ = qq||;
while ( my $par = <> ) {
chomp $par;
## Save data of the header.
if ( $. == 1 ) {
my @header = $par =~ m/\ASer?ver\s+Name\s+(\S+)\s+(\S+)\s+(\S+)\s*\Z/s;
last unless @header;
$header = join qq| |, @header;
next;
}
## Number of '\n' in each paragraph (number of lines minus one).
$newlines = $par =~ tr/\n/\n/;
## Three lines, the CPU info. Extract what I need and print.
if ( $newlines == 2 ) {
printf qq|%s %s %s\n|, $header, $par =~ m/\A([^\n]+).*\n([^\n]+)\Z/s;
next;
}
## Transmission string.
if ( $newlines == 0 ) {
$trans = $par;
next;
}
## Transmission info. Extract numbers and print.
if ( $newlines == 5 ) {
my @lines = split /\n/, $par;
for my $i ( 0 .. $#lines ) {
my @f = split /\s+/, $lines[ $i ];
if ( grep { m/\D/ } @f[ 1 .. $#f ] ) {
next;
}
else {
push @nums, @f[ 1 .. $#f ];
}
}
printf qq|%s %s\n|, $header, join qq| |, @nums;
@nums = ();
}
## Resume info. Extract and print.
if ( $newlines == 4 ) {
$par =~ s/\n/\t/gs;
printf qq|%s %s\n|, $header, $par;
}
}
Run it like:
perl script.pl infile
With the following output:
aad98722RHEL 20120630 075022 CPU 5 8 0 1 19
aad98722RHEL 20120630 075022 32 33 39 40 33 6 186 321 199 18 0 0 0 0 0 0 0 0 0 0
aad98722RHEL 20120630 075022 19 19 25 26 23 9 26 40 28 10 0 0 0 0 0 0 0 0 0 0
aad98722RHEL 20120630 075022 Total memory usage: 1412916 kB Resident set size : 1256360 kB Heap usage : 1368212 kB Stack usage : 84 kB Library size : 16316 kB
I have records (rows) like this:
0 1 4 8 2 3 7 9 3 4 8 9 4 7 9 1 0 0 2 5 8 2 4 5 6 1 0 2 4 8 9 0
Definitions:
group: a collection of numbers separated by 0s (zeros)
sub-group: a collection of numbers within a group, separated by the group's local minima
local minimum: a number whose preceding and following numbers are both greater
In the above example there are 3 groups and 7 sub-groups, i.e.
groups: 1 4 8 2 3 7 9 3 4 8 9 4 7 9 1 , 2 5 8 2 4 5 6 1 , 2 4 8 9
sub-groups: 1 4 8 , 3 7 9 , 4 8 9 , 7 9 1 , 2 5 8 , 4 5 6 1 , 2 4 8 9 (this last is identical to the group itself)
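(For example, in the first group the 2 that follows 8 is a local minimum because 8 > 2 and 2 < 3, which is why the first sub-group ends after 1 4 8.)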
So, for this kind of record I have to:
find the minima (print out: 2, 3, 4, 2)
find the size (number of characters) of these sub-groups
find the positions of the sub-groups' numbers within their groups
I have already started to write something, but I am stuck here...
Can anyone help me to solve this?
Here is the code so far:
#!/usr/bin/awk -f
{
db = split($0,a,/( 0)+ */)
for (i=1; i<=db; i++) {
split_at_max(a[i])
for (j=1; j<=ret_count; j++) {
print ""
for (k=1; k<=maximums[j]; k++) {
print ret[j,k]
}
}
}
}
function split_at_max(x) {
m_db = split(x,values," ")
for (mx in ret) {
delete ret[mx]
}
ret_count = 1
ret_curr_db = 0
for (mi=2; mi<m_db; mi++) {
ret_curr_db++
ret[ret_count,ret_curr_db] = values[mi-1]
if ( (values[mi-1] <= values[mi]) &&
(values[mi] >= values[mi+1]) &&
(values[mi+1] <= values[mi+2]) ) {
maximums[ret_count] = ret_curr_db
ret_count++
ret_curr_db = 0
}
}
ret_curr_db++
ret[ret_count,ret_curr_db] = values[mi-1]
ret_curr_db++
ret[ret_count,ret_curr_db] = values[mi]
maximums[ret_count] = ret_curr_db
}
Interesting assignment.
I wrote a quick and dirty awk script; there should be a lot of room to optimize. I don't know what kind of output you're expecting...
awk -v RS="0" 'NF>1{
delete g;
print "group:";
for(i=1;i<=NF;i++){
printf $i" ";
g[i]=$i
}
print "";
t=1;
delete m;
for(i=2;i<length(g);i++){
if(g[i-1]>g[i] && g[i]<g[i+1]) {
print "found minima:"g[i]
m[t]=i;
t++;
}
}
if(length(m)>0){
s=0;
for(x=1;x<=length(m);x++){
printf "sub-group: "
for(i=s+1;i<m[x];i++){
printf g[i]" "
s=m[x];
}
print "";
if(x+1>length(m)){
printf "sub-group: ";
for(i=s+1;i<=length(g);i++)
printf g[i]" "
print "";
}
}
}else{
print "no minima found. sub-group is the same as group:"
printf "sub-group: "
for(i=1;i<=NF;i++){
printf $i" ";
g[i]=$i
}
}
print "\n-----------------------------"
}' yourFile
the output on your example input:
group:
1 4 8 2 3 7 9 3 4 8 9 4 7 9 1
found minima:2
found minima:3
found minima:4
sub-group: 1 4 8
sub-group: 3 7 9
sub-group: 4 8 9
sub-group: 7 9 1
-----------------------------
group:
2 5 8 2 4 5 6 1
found minima:2
sub-group: 2 5 8
sub-group: 4 5 6 1
-----------------------------
group:
2 4 8 9
no minima found. sub-group is the same as group:
sub-group: 2 4 8 9
-----------------------------
Update
Fixing it for those "special" elements that contain a zero, like 20, 30, 40...
Still quick and dirty:
change the invocation of my awk script above to
sed 's/^0$//g' yourFile | awk -v RS="" '[same awk code as above]'
then the output is:
group:
6 63 81 31 37 44 20
found minima:31
sub-group: 6 63 81
sub-group: 37 44 20
-----------------------------