Generating leads and lags for non-consecutive time periods in SAS

With a SAS dataset like
Ob x year pid grp
1 3.88 2001 1 a
2 2.88 2002 1 a
3 0.13 2004 1 a
4 3.70 2005 1 a
5 1.30 2007 1 a
6 0.95 2001 2 b
7 1.79 2002 2 b
8 1.59 2004 2 b
9 1.29 2005 2 b
10 0.96 2007 2 b
I would like to get
Ob x year pid grp x_F1 x_L1
1 3.88 2001 1 a 2.88 .
2 2.88 2002 1 a . 3.88
3 0.13 2004 1 a 3.7 .
4 3.7 2005 1 a . 0.13
5 1.3 2007 1 a . .
6 0.95 2001 2 b 1.79 .
7 1.79 2002 2 b . 0.95
8 1.59 2004 2 b 1.29 .
9 1.29 2005 2 b . 1.59
10 0.96 2007 2 b . .
where for observations with the same pid and each year t,
x_F1 is the value of x in year t+1 and
x_L1 is the value of x in year t-1
In my data set, not all pids have observations in successive years.
My attempt using the expand proc
proc expand data=have out=want method=none;
by pid; id year;
convert x = x_F1 / transformout=(lead 1);
convert x = x_F2 / transformout=(lead 2);
convert x = x_F3 / transformout=(lead 3);
convert x = x_L1 / transformout=(lag 1);
convert x = x_L2 / transformout=(lag 2);
convert x = x_L3 / transformout=(lag 3);
run;
did not account for the fact that years are not consecutive.

You could stick with proc expand to insert the missing years into your data (utilising the EXTRAPOLATE option). I've set the FROM= value to DAY because PROC EXPAND then treats the ID variable as a sequence of consecutive integers, which works with your data since YEAR is stored as an integer rather than a date.
Like the other answers, it requires 2 passes of the data, but I don't think there's an alternative to this.
data have;
input x year pid grp $;
datalines;
3.88 2001 1 a
2.88 2002 1 a
0.13 2004 1 a
3.70 2005 1 a
1.30 2007 1 a
0.95 2001 2 b
1.79 2002 2 b
1.59 2004 2 b
1.29 2005 2 b
0.96 2007 2 b
;
run;
proc expand data = have out = have1
method=none extrapolate
from=day to=day;
by pid;
id year;
run;
proc expand data=have1 out=want method=none;
by pid; id year;
convert x = x_F1 / transformout=(lead 1);
convert x = x_F2 / transformout=(lead 2);
convert x = x_F3 / transformout=(lead 3);
convert x = x_L1 / transformout=(lag 1);
convert x = x_L2 / transformout=(lag 2);
convert x = x_L3 / transformout=(lag 3);
run;
or this can be done in one go, subject to whether the value of x matters in the final dataset: the one-step output keeps the inserted rows, with x missing for the filled-in years.
proc expand data=have out=want1 method=none extrapolate from=day to=day;
by pid; id year;
convert x = x_F1 / transformout=(lead 1);
convert x = x_F2 / transformout=(lead 2);
convert x = x_F3 / transformout=(lead 3);
convert x = x_L1 / transformout=(lag 1);
convert x = x_L2 / transformout=(lag 2);
convert x = x_L3 / transformout=(lag 3);
run;

Here is a simple approach using proc sql. It joins the data with itself twice, once for the forward lag and once for the backward lag, then takes the required values where they exist.
proc sql;
create table want as
select
a.*,
b.x as x_f1,
c.x as x_l1
from have as a
left join have as b
on a.pid = b.pid and a.year = b.year - 1
left join have as c
on a.pid = c.pid and a.year = c.year + 1
order by
a.pid,
a.year;
quit;
Caveats:
It will not scale well to larger numbers of lags.
This is probably not the quickest approach.
It requires that there be only one observation for each pid year pair, and would need modifying if this is not the case.
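For that last caveat, one possible pre-step (a sketch only: have_dedup is a name I've made up, summing x is just one choice of how to collapse duplicates, and non-key columns such as grp would need their own handling) is to aggregate first and run the self-join against the collapsed table:
proc sql;
/* collapse any duplicate pid/year pairs before the self-join */
create table have_dedup as
select pid, year, sum(x) as x /* or max(x), mean(x), ... */
from have
group by pid, year;
quit;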

Sort your data per group and per year.
Compute x_L1 in a data step with a lag and a condition like this: if (year and lag(year) are consecutive) then x_L1=lag(x).
Sort your data the other way around.
Compute x_F1 similarly.
I'm trying to write you working code right now.
If you provide me with a data sample (a data step with an infile, e.g.), I can better try and test it.
This seems to work with my data:
/*1*/
proc sort data=WORK.QUERY_FOR_EPILABO_CLEAN_NODUP out=test1(where=(year<>1996)) nodupkey;
by grp year;
run;
/*2*/
data test2;
*retain x;
set test1;
by grp;
x_L1=lag(x);
if first.grp then
x_L1=.;
yeardif=dif(year);
if (yeardif ne 1) then
x_L1=.;
run;
/*3*/
proc sort data=test2(drop=yeardif) out=test3;
by grp descending year;
run;
/*4*/
data test4;
*retain x;
set test3;
by grp;
x_F1=lag(x);
if first.grp then
x_F1=.;
yeardif=dif(year);
if (yeardif ne -1) then
x_F1=.;
run;

Related

SAS sum observations not in a group, by multiple groups

This post follows this one: SAS sum observations not in a group, by group
Sadly, my minimal example there was a bit too minimal, and I wasn't able to use the answers on my data.
Here is a complete example. What I have is:
data have;
input group1 group2 group3 $ value;
datalines;
1 A X 2
1 A X 4
1 A Y 1
1 A Y 3
1 B Z 2
1 B Z 1
1 C Y 1
1 C Y 6
1 C Z 7
2 A Z 3
2 A Z 9
2 A Y 2
2 B X 8
2 B X 5
2 B X 5
2 B Z 7
2 C Y 2
2 C X 1
;
run;
For each group, I want a new variable "sum" with the sum of all values in the column for the same sub-groups (group1 and group2), except for the group (group3) the observation is in.
data want;
input group1 group2 group3 $ value sum;
datalines;
1 A X 2 8
1 A X 4 6
1 A Y 1 9
1 A Y 3 7
1 B Z 2 1
1 B Z 1 2
1 C Y 1 13
1 C Y 6 8
1 C Z 7 7
2 A Z 3 11
2 A Z 9 5
2 A Y 2 12
2 B X 8 17
2 B X 5 20
2 B X 5 20
2 B Z 7 18
2 C Y 2 1
2 C X 1 2
;
run;
My goal is to use either data steps or proc sql (I'm doing this on around 30 million observations, and PROC MEANS and the like seemed slower than data steps and SQL on previous similar computations).
My issue with the solutions provided in the linked post is that they use the total value of the column, and I don't know how to change this to use the total within the sub-group.
Any idea please?
A SQL solution will join all data to an aggregating select:
proc sql;
create table want as
select have.group1, have.group2, have.group3, have.value
, aggregate.sum - value as sum
from
have
join
(select group1, group2, sum(value) as sum
from have
group by group1, group2
) aggregate
on
aggregate.group1 = have.group1
& aggregate.group2 = have.group2
;
quit;
SQL can be slower than a hash solution, but SQL code is understood by more people than SAS DATA step code involving hashes (which can be faster than SQL).
data want2;
if 0 then set have; * prep pdv;
declare hash sums (suminc:'value');
sums.defineKey('group1', 'group2');
sums.defineDone();
do while (not hash_loaded);
set have end=hash_loaded;
sums.ref(); * adds value to internal sum of hash data record;
end;
do while (not last_have);
set have end=last_have;
sums.sum(sum:sum); * retrieve group sum.;
sum = sum - value; * subtract from group sum;
output;
end;
stop;
run;
SAS documentation touches on SUMINC and has some examples.
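As a quick toy illustration of SUMINC (my own minimal sketch, separate from the answers' code): when a hash is declared with suminc:'value', each ref() call adds the current value to that key's running sum inside the hash, and the sum() method reads the accumulated total back:
data _null_;
set have end=done;
if _n_ = 1 then do;
declare hash h(suminc:'value'); * each ref() adds VALUE to the keyed sum;
h.definekey('group1');
h.definedone();
end;
rc = h.ref(); * create or update the entry for this GROUP1;
if done then do;
group1 = 1;
rc = h.sum(sum:total); * read back the accumulated sum for group1=1;
put 'group1=1 total: ' total;
end;
run;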
The question's desired output does not actually reflect this concept:
For each row, compute the tier-2 sum that excludes the tier-3 group the row is in.
A hash-based solution for that requires tracking both the two-level and the three-level sums:
data want2;
if 0 then set have; * prep pdv;
declare hash T2 (suminc:'value'); * hash for two (T)iers;
T2.defineKey('group1', 'group2'); * one hash record per combination of group1, group2;
T2.defineDone();
declare hash T3 (suminc:'value'); * hash for three (T)iers;
T3.defineKey('group1', 'group2', 'group3'); * one hash record per combination of group1, group2, group3;
T3.defineDone();
do while (not hash_loaded);
set have end=hash_loaded;
T2.ref(); * adds value to internal sum of hash data record;
T3.ref();
end;
T2_cardinality = T2.num_items;
T3_cardinality = T3.num_items;
put 'NOTE: |T2| = ' T2_cardinality;
put 'NOTE: |T3| = ' T3_cardinality;
do while (not last_have);
set have end=last_have;
T2.sum(sum:t2_sum);
T3.sum(sum:t3_sum);
sum = t2_sum - t3_sum;
output;
end;
stop;
drop t2_: t3_:;
run;

SAS sum observations not in a group, by group

I have a data set :
data have;
input group $ value;
datalines;
A 4
A 3
A 2
A 1
B 1
C 1
D 2
D 1
E 1
F 1
G 2
G 1
H 1
;
run;
The first variable is a group identifier, the second a value.
For each group, I want a new variable "sum" with the sum of all values in the column, except for the group the observation is in.
My issue is having to do that on nearly 30 million observations, so efficiency matters.
I found that using a data step was more efficient than using procs.
The final dataset should look like:
data want;
input group $ value sum;
datalines;
A 4 11
A 3 11
A 2 11
A 1 11
B 1 20
C 1 20
D 2 18
D 1 18
E 1 20
F 1 20
G 2 18
G 1 18
H 1 20
;
run;
Any idea how to perform this please?
Edit: I don't know if this matters, but the example I gave is a simplified version of my issue. In the real case, I have 2 other group variables, so taking the sum of the whole column and subtracting the sum in the group is not a viable solution.
The requirement
sum of all values in the column, except for the group the observation is in
indicates two passes of the data must occur:
1. Compute the all_sum and each group's group_sum. A hash can store each group's sum, computed via a specified SUMINC: variable and .ref() method invocation; an ordinary variable can accumulate allsum.
2. Compute allsum - group_sum for each row of a group. The group_sum is retrieved from the hash and subtracted from allsum.
Example:
data want;
if 0 then set have; * prep pdv;
declare hash sums (suminc:'value');
sums.defineKey('group');
sums.defineDone();
do while (not hash_loaded);
set have end=hash_loaded;
sums.ref(); * adds value to internal sum of hash data record;
allsum + value;
end;
do while (not last_have);
set have end=last_have;
sums.sum(sum:sum); * retrieve group's sum. Do you hear the Dragnet theme too?;
sum = allsum - sum; * subtract from allsum;
output;
end;
stop;
run;
What is wrong with a straightforward approach? You need to make two passes no matter what you do.
Like this. I included extra variables so you can see how the values are derived.
proc sql ;
create table want as
select a.*,b.grand,sum(value) as total, b.grand - sum(value) as sum
from have a
, (select sum(value) as grand from have) b
group by a.group
;
quit;
Results:
Obs group value grand total sum
1 A 3 21 10 11
2 A 1 21 10 11
3 A 2 21 10 11
4 A 4 21 10 11
5 B 1 21 1 20
6 C 1 21 1 20
7 D 2 21 3 18
8 D 1 21 3 18
9 E 1 21 1 20
10 F 1 21 1 20
11 G 1 21 3 18
12 G 2 21 3 18
13 H 1 21 1 20
Note it does not matter what you have as your GROUP BY clause.
Do you really need to output all of the original observations? Why not just output the summary table?
proc sql ;
create table want as
select a.group, sum(value) as total, b.grand - sum(value) as sum
from have a
, (select sum(value) as grand from have) b
group by a.group
;
quit;
Results
Obs group total sum
1 A 10 11
2 B 1 20
3 C 1 20
4 D 3 18
5 E 1 20
6 F 1 20
7 G 3 18
8 H 1 20
I would break this out into two different segments:
1.) You could start by using PROC SQL to get the sums by group.
2.) Then reassign the values by group, as sketched below.
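A rough sketch of that two-step idea (my own, not tested against the real data: the grand macro variable and grp_sums table are made-up names, a merge stands in for literal IF/THEN blocks per group, and it assumes have is sorted by group):
proc sql noprint;
select sum(value) into :grand trimmed from have; /* pass 1a: grand total */
create table grp_sums as /* pass 1b: per-group totals */
select a.group, sum(a.value) as group_sum
from have as a
group by a.group;
quit;
data want;
merge have grp_sums;
by group;
sum = &grand - group_sum; * everything except this observation's group;
drop group_sum;
run;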

Restructuring complicated and large dataframe

I have a large dataframe with 41040 obs. and 20 variables.
Here I will simplify the mock data set so it's easier to understand the question.
What I have:
rm(list = ls())
variable <- rep(c('var1', 'var1_2', 'var1_3', 'var1_4'), 5)
group <- as.factor(rep(c('county1', 'county2', 'county3', 'county4'), 5))
year <- rep(c(2000:2004), 4)
month <- c(rep(1:12, 1), 1:8)
value1 <- sample(1:10000, 20)
value2 <- sample(1:10000, 20)
value3 <- sample(1:10000, 20)
mydata <- data.frame(variable, group, year, month, value1, value2, value3)
head(mydata)
variable group year month value1 value2 value3
1 var1 county1 2000 1 4848 4759 6029
2 var1_2 county2 2001 2 7624 3486 6745
3 var1_3 county3 2002 3 4612 9155 4266
4 var1_4 county4 2003 4 1496 2420 9451
5 var1 county1 2004 5 6739 4312 5577
6 var1_2 county2 2000 6 5127 5030 5479
What I want from this is to get another data.frame where values won't be mixed up across counties, years or months, but each column will represent one variable from the variable column. To clarify, on the same example, I am looking for the quickest way to get this:
var1 <- c(t(mydata[1, 5:7]))
var1_2 <- c(t(mydata[2, 5:7]))
var1_3 <- c(t(mydata[3, 5:7]))
var1_4 <- c(t(mydata[4, 5:7]))
group2 <- rep('county1', 3)
year2 <- rep(2000, 3)
month2 <- rep(1, 3)
mydata2 <- data.frame(group2, year2, month2, var1, var1_2, var1_3, var1_4)
head(mydata2)
group2 year2 month2 var1 var1_2 var1_3 var1_4
county1 2000 1 4848 7624 4612 1496
county1 2000 1 4759 3486 9155 2420
county1 2000 1 6029 6745 4266 9451
After all values for county1, year 2000 and month 1 are written, I want it to go to month 2, year 2000 and county1, then month 3, etc. After all months are done I want year 2001 for county1, etc., and in the end move on to county2.
I tried various ways with melt(), dcast(), stack(), unstack(), gather() and spread() with no success.
I did it, not super-elegantly though. I just divided the original data.frame into new data.frames by selecting the first 4 variables and then alternating the following value variables which needed to be cast. Like this:
library(dplyr) # for select()
library(reshape2) # for dcast()
res <- select(mydata, c(1:4, 5)) # I changed this 5 to 6, then to 7, etc.
base <- dcast(res, group + year + month ~ variable, value.var = 'value1')
After I did this for each column, I used cbind to create a new, cast dataframe:
cbind(base, var1_2[ , 5:14], var1_3[ , 6:14])
It works, although I would still like to see a nicer way to do this automatically in one or two lines.

SAS hierarchical structure sum

I have a dataset with a hierarchical codelist variable.
The logic of the hierarchy is determined by the LEVEL variable and the prefix structure of the CODE character variable.
There are 6 (code length from 1 to 6) "aggregate" levels and the terminal level (code length of 10 characters).
I need to update the nodes variable (the count of terminal nodes; aggregate levels do not count towards the "higher" aggregates, only the terminal nodes do), so that the sum of counts is the same at every level - for example, the total count across all level-5 rows equals the total count across all level-6 rows.
And I need to calculate (sum up) the weight into the "higher" level nodes.
NOTE: I offset the output table's NODES and WEIGHT variable so you can see better what I am talking about (just add up the numbers in each offset and you get the same value).
EDIT1: there can be multiple observations with the same code. A unique observation is a combination of the 3 variables code + var1 + var2.
Input table:
ID level code var1 var2 nodes weight myIndex
1 1 1 . . 999 999 999
2 2 11 . . 999 999 999
3 3 111 . . 999 999 999
4 4 1111 . . 999 999 999
5 5 11111 . . 999 999 999
6 6 111111 . . 999 999 999
7 10 1111119999 01 1 1 0.1 105.5
8 10 1111119999 01 2 1 0.1 109.1
9 6 111112 . . 999 999 999
10 10 1111120000 01 1 1 0.5 95.0
11 5 11119 . . 999 999 999
12 6 111190 . . 999 999 999
13 10 1111901000 01 1 1 0.1 80.7
14 10 1111901000 02 1 1 0.2 105.5
Desired output table:
ID level code var1 var2 nodes weight myIndex
1 1 1 . . 5 1.0 98.1
2 2 11 . . 5 1.0 98.1
3 3 111 . . 5 1.0 98.1
4 4 1111 . . 5 1.0 98.1
5 5 11111 . . 3 0.7 98.5
6 6 111111 . . 2 0.2 107.3
7 10 1111119999 01 1 1 0.1 105.5
8 10 1111119999 01 2 1 0.1 109.1
9 6 111112 . . 1 0.5 95.0
10 10 1111120000 01 1 1 0.5 95.0
11 5 11119 . . 2 0.3 97.2
12 6 111190 . . 2 0.3 97.2
13 10 1111901000 01 1 1 0.1 80.7
14 10 1111901000 02 1 1 0.2 105.5
And here's the code I came up with. It works just like I wanted, but man, it is really slow. I need something way faster, because this is a part of a webservice which has to run "instantly" on request.
Any suggestions on speeding up the code, or any other solutions are welcome.
%macro doit;
data temporary;
set have;
run;
%do i=6 %to 2 %by -1;
%if &i = 6 %then %let x = 10;
%else %let x = (&i+1);
proc sql noprint;
select count(code)
into :cc trimmed
from have
where level = &i;
select code
into :id1 - :id&cc
from have
where level = &i;
quit;
%do j=1 %to &cc.;
%let idd = &&id&j;
proc sql;
update have t1
set nodes = (
select sum(nodes)
from temporary t2
where t2.level = &x and t2.code like ("&idd" || "%")),
weight = (
select sum(weight)
from temporary t2
where t2.level = &x and t2.code like ("&idd" || "%"))
where (t1.level = &i and t1.code like "&idd");
quit;
%end;
%end;
%mend doit;
Current code based on #Quentin's solution:
data have;
input ID level code : $10. nodes weight myIndex;
cards;
1 1 1 . . .
2 2 11 . . .
3 3 111 . . .
4 4 1111 . . .
5 5 11111 . . .
6 6 111111 . . .
7 10 1111110000 1 0.1 105.5
8 10 1111119999 1 0.1 109.1
9 6 111112 . . .
10 10 1111129999 1 0.5 95.0
11 5 11119 . . .
12 6 111190 . . .
13 10 1111900000 1 0.1 80.7
14 10 1111901000 1 0.2 105.5
;
data want (drop=_:);
*hash table of terminal nodes;
if (_n_ = 1) then do;
if (0) then set have (rename=(code=_code weight=_weight myIndex=_myIndex));
declare hash h(dataset:'have(where=(level=10) rename=(code=_code weight=_weight myIndex=_myIndex))');
declare hiter iter('h');
h.definekey('ID');
h.definedata('_code','_weight','_myIndex');
h.definedone();
end;
set have;
*for each non-terminal node, iterate through;
*hash table of all terminal nodes, looking for children;
if level ne 10 then do;
call missing(weight, nodes, myIndex);
do _n_ = iter.first() by 0 while (_n_ = 0);
if trim(code) =: _code then do;
weight=sum(weight,_weight);
nodes=sum(nodes,1);
myIndex=sum(myIndex,_myIndex*_weight);
end;
_n_ = iter.next();
end;
myIndex=round(myIndex/weight,.1);
end;
output;
run;
Here's an alternative hash approach.
Rather than using a hash object to do a cartesian join, this adds the nodes & weight from each level 10 node to each of the 6 applicable parent nodes as it goes along. This may be marginally faster than Quentin's approach as there are no redundant hash lookups.
It takes a bit longer than Quentin's approach when constructing the hash object, and uses a bit more memory, as each terminal node is added 6 times with different keys and existing entries often have to be updated, but afterwards each parent node only has to look up its own individual stats, rather than looping through all the terminal nodes, which is a substantial saving.
Weighted stats are possible as well, but you have to update both loops, not just the second one.
data want;
if 0 then set have;
dcl hash h();
h.definekey('code');
h.definedata('nodes','weight','myIndex');
h.definedone();
length t_code $10;
do until(eof);
set have(where = (level = 10)) end = eof;
t_nodes = nodes;
t_weight = weight;
t_myindex = weight * myIndex;
do _n_ = 1 to 6;
t_code = substr(code,1,_n_);
if h.find(key:t_code) ne 0 then h.add(key:t_code,data:t_nodes,data:t_weight,data:t_myIndex);
else do;
nodes + t_nodes;
weight + t_weight;
myIndex + t_myIndex;
h.replace(key:t_code,data:nodes,data:weight,data:MyIndex);
end;
end;
end;
do until(eof2);
set have end = eof2;
if level ne 10 then do;
h.find();
myIndex = round(MyIndex / Weight,0.1);
end;
output;
end;
drop t_:;
run;
Below is a brute-force hash approach to doing a similar Cartesian product as in the SQL. Load a hash table of the terminal nodes. Then read through the dataset of nodes, and for each node that is not a terminal node, iterate through the hash table, identifying all of the child terminal nodes.
I think the approach #joop is describing may be more efficient, as this approach doesn't take advantage of the tree structure, so there is a lot of re-calculating. With 5000 records and 3000 terminal nodes, this would do 2000*3000 comparisons. But it might not be that slow, since the hash table is in memory and you're not going to have excessive I/O.
data want (drop=_:);
*hash table of terminal nodes;
if (_n_ = 1) then do;
if (0) then set have (rename=(code=_code weight=_weight));
declare hash h(dataset:'have(where=(level=10) rename=(code=_code weight=_weight))');
declare hiter iter('h');
h.definekey('ID');
h.definedata('_code','_weight');
h.definedone();
end;
set have;
*for each non-terminal node, iterate through;
*hash table of all terminal nodes, looking for children;
if level ne 10 then do;
call missing(weight, nodes);
do _n_ = iter.first() by 0 while (_n_ = 0);
if trim(code) =: _code then do;
weight=sum(weight,_weight);
nodes=sum(nodes,1);
end;
_n_ = iter.next();
end;
end;
output;
run;
It seems pretty simple. Just join back with itself and count/sum.
proc sql ;
create table want as
select a.id, a.level, a.code , a.var1, a.var2
, count(b.id) as nodes
, sum(b.weight) as weight
from have a
left join have b
on a.code eqt b.code
and b.level=10
group by 1,2,3,4,5
order by 1
;
quit;
If you don't want to use the EQT operator then you can use the SUBSTR() function instead.
on a.code = substr(b.code,1,a.level)
and b.level=10
Since you're using SAS, how about using proc summary to do the heavy lifting here? No cartesian joins required!
One advantage of this option over the some of the others is that it's a bit easier to generalise if you want to calculate lots of more complex statistics for multiple variables.
data have;
input ID level code : $10. nodes weight myIndex;
format myIndex 5.1;
cards;
1 1 1 . . .
2 2 11 . . .
3 3 111 . . .
4 4 1111 . . .
5 5 11111 . . .
6 6 111111 . . .
7 10 1111110000 1 0.1 105.5
8 10 1111119999 1 0.1 109.1
9 6 111112 . . .
10 10 1111129999 1 0.5 95.0
11 5 11119 . . .
12 6 111190 . . .
13 10 1111900000 1 0.1 80.7
14 10 1111901000 1 0.2 105.5
;
run;
data v_have /view = v_have;
set have(where = (level = 10));
array lvl[6] $6;
do i = 1 to 6;
lvl[i]=substr(code,1,i);
end;
drop i;
run;
proc summary data = v_have;
class lvl1-lvl6;
var nodes weight;
var myIndex /weight = weight;
ways 1;
output out = summary(drop = _:) sum(nodes weight)= mean(myIndex)=;
run;
data v_summary /view = v_summary;
set summary;
length code $10;
code = cats(of lvl:);
drop lvl:;
run;
data have;
modify have v_summary;
by code;
replace;
run;
In theory a hash of hashes might also be an appropriate data structure, but that would be extremely complicated for a relatively small benefit. I might have a go anyway just as a learning exercise...
One approach (I think) would be to make the Cartesian product, and find all of the terminal nodes that are a "match" to each of the nodes, then sum the weights.
Something like:
data have;
input ID level code : $10. nodes weight ;
cards;
1 1 1 . .
2 2 11 . .
3 3 111 . .
4 4 1111 . .
5 5 11111 . .
6 6 111111 . .
7 10 1111110000 1 0.1
8 10 1111119999 1 0.1
9 6 111112 . .
10 10 1111129999 1 0.5
11 5 11119 . .
12 6 111190 . .
13 10 1111900000 1 0.1
14 10 1111901000 1 0.2
;
proc sql;
select min(id) as id
, min(level) as level
, a.code
, count(b.weight) as nodes /*count of terminal nodes*/
, sum(b.weight) as weight /*sum of weights of terminal nodes*/
from
have as a
,(select code , weight
from have
where level=10 /*selects terminal nodes*/
) as b
where a.code eqt b.code /*EQT is equivalent to =: */
group by a.code
;
quit;
I'm not sure that is correct, but it gives the desired results for the sample data.
This is the SQL needed to estimate the parent record for every record. It only uses string functions (position and length), so it should be adaptable to any dialect of SQL, maybe even SAS (the CTE might need to be rewritten as subqueries or a view). The idea is to:
add a parent_id field to the dataset
find the record with the longest substring of code
and use its id as the value for our parent_id
(after that) update the records from the sum(nodes),sum(weight) of their direct children (the ones with child.parent_id = this.id )
BTW: I could have used LEVEL instead of LENGTH(code); the data is a bit redundant in this aspect.
WITH sub AS (
SELECT id, length(code) AS len
, code
FROM tree)
UPDATE tree t
SET parent_id = s.id
FROM sub s
WHERE length(t.code) > s.len AND POSITION (s.code IN t.code) = 1
AND NOT EXISTS (
SELECT *
FROM sub nx
WHERE nx.len > s.len AND nx.len < length(t.code)
AND POSITION (nx.code IN t.code ) = 1
)
;
SELECT * FROM tree
ORDER BY parent_id DESC NULLS LAST
, id
;
After finding the parents, the whole table should be updated (repeatedly) from itself
like:
-- PREPARE omg( integer) AS
UPDATE tree t
SET nodes = s.nodes , weight = s.weight
FROM ( SELECT parent_id , SUM(nodes) AS nodes , SUM(weight) AS weight
FROM tree GROUP BY parent_id) s
WHERE s.parent_id = t.id
;
In SAS, this could probably be done by sorting on {0-parent_id, id} and doing some retain+summation magic (my SAS is a bit rusty in this area).
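Sketching that idea in SAS (my own rough sketch, using a hash keyed on parent_id in place of RETAIN): reading the data deepest-level-first means every node's children are complete before the node itself is read. It assumes a tree dataset with id, parent_id and level, where parent_id has already been computed as above and only the level-10 leaves start with non-missing nodes/weight.
proc sort data=tree;
by descending level; * children sort before their parents;
run;
data want(drop=c_:);
if _n_ = 1 then do;
declare hash agg(); * child sums waiting for each parent id;
agg.definekey('parent_id');
agg.definedata('c_nodes','c_weight');
agg.definedone();
end;
call missing(c_nodes, c_weight);
set tree;
* non-leaf: pick up the sums its children have already deposited;
if level ne 10 and agg.find(key:id) = 0 then do;
nodes = c_nodes;
weight = c_weight;
end;
* deposit this node's (now complete) totals into its parent's slot;
if not missing(parent_id) then do;
if agg.find() ne 0 then call missing(c_nodes, c_weight);
c_nodes = sum(c_nodes, nodes);
c_weight = sum(c_weight, weight);
agg.replace();
end;
run;
The output comes out in descending-level order, so a final sort by id would restore the original order.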
UPDATE: if only the leaf nodes have non-NULL (non-missing) values for {nodes, weight}, the aggregation can be done in one sweep for the entire tree, without first computing the parent_ids:
UPDATE tree t
SET nodes = s.nodes , weight = s.weight
FROM ( SELECT p.id , SUM(c.nodes) AS nodes , SUM(c.weight) AS weight
FROM tree p
JOIN tree c ON c.lev > p.lev AND POSITION (p.code IN c.code ) = 1
GROUP BY p.id
) s
WHERE s.id = t.id
;
An index on {lev,code} will probably speed up things. (assuming an index on id)

SAS: proc freq list view, creating dummy

Is there any way to create dummy variables for the list view generated from SAS proc freq?
e.g.
This is my proc freq output:
x y z N %
0 0 0 10 2.8
0 0 1 20 5.6
0 1 0 30 8.3
0 1 1 40 11.1
1 0 0 50 13.9
1 0 1 60 16.7
1 1 0 70 19.4
1 1 1 80 22.2
Can I create (easily in proc freq) dummy variables that have 1/0 values for each level of the output (that is, 8 dummy variables), or alternatively a single variable which will have incremental values of 1, 2, 3, ... for each level of the output?
Thanks in advance!
Here's one way you can do it with a single variable, assuming you just have combinations of variables with values of only 0 or 1:
data yourdata;
do i = 1 to 100;
x = round(ranuni(1));
y = round(ranuni(2));
z = round(ranuni(3));
t = 1;
output;
end;
run;
proc summary nway data = yourdata;
class x y z;
var t;
output out = summary_ds n=;
run;
data summary_ds;
set summary_ds;
singlevar = input(cats(x,y,z),binary3.);
run;
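And for the 8 separate dummy variables the question also mentions, a possible follow-on step (my own sketch; dummy1-dummy8 are made-up names) keys the indicators off singlevar, which the BINARY3. informat makes run from 0 to 7 (add 1 if you prefer 1 to 8):
data summary_ds;
set summary_ds;
array d[8] dummy1-dummy8;
do i = 1 to 8;
d[i] = (singlevar = i - 1); * 1 for this row's x/y/z combination, 0 otherwise;
end;
drop i;
run;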