I have a SAS data set which has 2 columns:
Var1 Var2
A B
B C
C D
D E
F G
H F
Can I create the same unique key for the rows above? The final output that I want is:
Var1 Var2 Key
A B 1
B C 1
C D 1
D E 1
F G 2
H F 2
The general problem of assigning a group identifier based on row-to-row linkages can be very rich and difficult. However, for the sequential case the solution is not so bad.
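(In graph terms this is finding connected components; the sequential case is easy because each row only needs to be compared with the row immediately before it.)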
Sample code
Presume the group identity changes when neither variable value is present in the prior row.
data have;
  input Var1 $ Var2 $;
  datalines;
A B
B C
C D
D E
F G
H F
run;
data want;
  set have;
  /* add 1 when var1 does not match the prior row's var2 and var2 does not
     match the prior row's var1, i.e. the chain is broken */
  group_id + ( var1 ne lag(var2) AND var2 ne lag(var1) );
run;
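The sum statement group_id + (expression) implicitly retains group_id across rows, starting from 0, and adds the value of the logical expression, 1 when true and 0 when false, so the counter increments exactly at the rows where a new group starts.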
Complex case
@Vivek Gupta states in comments:
There is a random arrangement of rows in the dataset.
Consider arbitrary rows p and q with items X and Y. Groups are created by linkages whose criterion is:
p.X = q.X
OR p.X = q.Y
OR p.Y = q.X
OR p.Y = q.Y
A hash-based solver will populate groups initially from a data scan. Repeated scans of the data with hash lookups migrate items into lower-numbered groups (thus enlarging those groups) until a scan makes no migrations.
data pairs;
  id + 1;
  input item1 $ item2 $;
  cards;
A B
C D
D E
B C
H F
X Y
F G
run;
data _null_;
  length item $8 group 8;
  retain item '' group .;
  if 0 then set pairs;

  declare hash pairs();
  pairs.defineKey('item1', 'item2');
  pairs.defineDone();

  declare hash map(ordered:'A');
  map.defineKey('item');
  map.defineData('item', 'group');
  map.defineDone();

  /* Pass 1: scan the data once, assigning provisional group ids */
  _groupId = 0;
  do until (end);
    set pairs end=end;
    pairs.replace();
    found1 = map.find(key:item1) eq 0; item1g = group;
    found2 = map.find(key:item2) eq 0; item2g = group;
    put item1= item2= found1= found2= item1g= item2g=;
    select;
      when (    found1 and not found2) map.add(key:item2, data:item2, data:item1g);
      when (not found1 and     found2) map.add(key:item1, data:item1, data:item2g);
      when (not found1 and not found2) do;
        _groupId + 1;
        map.add(key:item1, data:item1, data:_groupId);
        map.add(key:item2, data:item2, data:_groupId);
      end;
      otherwise;
    end;
  end;

  /* Passes 2+: rescan the pairs, migrating items into the lower group id,
     until a complete pass causes no migrations */
  declare hiter data('pairs');
  do iteration = 1 to 1000 until (discrete);
    put iteration=;
    discrete = 1;
    do index = 1 by 1 while (data.next() = 0);
      found1 = map.find(key:item1) eq 0; item1g = group;
      found2 = map.find(key:item2) eq 0; item2g = group;
      put index= item1= item2= item1g= item2g=;
      if (item1g < item2g) then do; map.replace(key:item2, data:item2, data:item1g); discrete = 0; end;
      if (item2g < item1g) then do; map.replace(key:item1, data:item1, data:item2g); discrete = 0; end;
    end;
  end;

  if discrete then put 'NOTE: discrete groups at ' iteration=;
  else put 'NOTE: groups not discrete after ' iteration=;

  map.output(dataset:'map');
run;
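In effect the iterator passes compute connected components: the first scan seeds provisional group ids, and every later scan merges any value that straddles two groups into the lower-numbered one, so the until (discrete) loop stops as soon as a full pass makes no migrations. The 1 to 1000 bound is just a safety cap on the number of passes.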
Complex case #2
Groups are created by linkages whose criterion is:
p.X = q.X
OR p.Y = q.Y
The following example is offsite and too long to post here.
How to create groups from rows associated by linkages in either of two variables
General statement of problem:
Given: P = p{i} = (p{i,1}, p{i,2}), a set of pairs (key1, key2).
Find: The distinct groups, G = g{x}, of P,
such that each pair p in a group g has this property:
key1 matches key1 of any other pair in g.
-or-
key2 matches key2 of any other pair in g.
In short, the example shows:
An iterative way using hashes.
Two hashes maintain the groupId assigned to each key value.
Two additional hashes are used to maintain group mapping paths.
When the data can be passed without causing a mapping, the groups have been fully determined.
A final pass is done: groupIds are assigned to each pair and the data is output to a table.
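As a quick illustration of the grouping logic itself (a Python sketch only, not the offsite SAS example; the sample pairs are invented), a union-find structure gives the same grouping in two short passes:

def group_pairs(pairs):
    # Union-find over pair indices plus tagged key values. Two pairs land in
    # the same group when they share key1 or share key2, transitively.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Link each pair to a node for its key1 value and one for its key2 value;
    # the two keys live in separate namespaces, matching the linkage rule.
    for i, (k1, k2) in enumerate(pairs):
        union(('pair', i), ('key1', k1))
        union(('pair', i), ('key2', k2))

    # Number the distinct roots 1, 2, ... in first-seen order.
    roots = {}
    return [roots.setdefault(find(('pair', i)), len(roots) + 1)
            for i in range(len(pairs))]

print(group_pairs([('A', 'X'), ('A', 'Y'), ('B', 'Y'), ('C', 'Z')]))
# -> [1, 1, 1, 2]: the first three pairs chain through shared key values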
As you have not described any logic, the query below will work for your sample output:
select Var1, Var2, 1 as [key]
from t
I am trying to sort tuples inside a bag based on three fields in descending order.
Example: Suppose I have the following bag created by grouping:
{(s,3,my),(w,7,pr),(q,2,je)}
I want to sort the tuples in the above grouped bag on fields $0, $1, $2, such that it first sorts on $0 of all the tuples and picks the tuple with the largest $0 value. If $0 is the same for all the tuples, it then sorts on $1, and so on.
The sorting should be applied to all the grouped bags iteratively.
Suppose we have a databag like:
{(21,25,34),(21,28,64),(21,25,52)}
Then, according to the requirement, the output should be:
{(21,25,34),(21,25,52),(21,28,64)}
Please let me know if you need any more clarification.
Order your tuple in a nested foreach. This will work.
Input:
(1,s,3,my)
(1,w,7,pr)
(1,q,2,je)
A = LOAD 'file' using PigStorage(',') AS (a:chararray,b:chararray,c:chararray,d:chararray);
B = GROUP A BY a;
C = FOREACH B GENERATE A;
D = FOREACH C {
od = ORDER A BY b, c, d;
GENERATE od;
};
DUMP C;
Result (which resembles your data):
({(1,s,3,my),(1,w,7,pr),(1,q,2,je)})
Output (DUMP D):
({(1,q,2,je),(1,s,3,my),(1,w,7,pr)})
This will work for all the cases.
Generate tuple with highest value:
A = LOAD 'file' using PigStorage(',') AS (a:chararray,b:chararray,c:chararray,d:chararray);
B = GROUP A BY a;
C = FOREACH B GENERATE A;
D = FOREACH C {
od = ORDER A BY b desc , c desc , d desc;
od1 = LIMIT od 1;
GENERATE od1;
};
dump D;
Generate the tuple with the highest value if all three fields are different; if all the tuples are the same, or if field 1 and field 2 are the same, then return all the tuples.
A = LOAD 'file' using PigStorage(',') AS (a:chararray,b:chararray,c:chararray,d:chararray);
B = GROUP A BY a;
C = FOREACH B GENERATE A;
F = RANK C; -- rank used to separate out the values if two tuples are the same
R = FOREACH F {
    dis = distinct A;
    GENERATE rank_C, COUNT(dis) AS (cnt:long), A;
};
R3 = FILTER R BY cnt != 1; -- filter out groups where all the tuples are the same
R4 = FOREACH R3 {
    fil1 = ORDER A by b desc, c desc, d desc;
    fil2 = LIMIT fil1 1;
    GENERATE rank_C, fil2;
}; -- find the largest tuple, except when all the tuples are the same
R5 = FILTER R BY cnt == 1; -- only contains groups where all the tuples are the same
R6 = FOREACH R5 GENERATE A; -- generate required fields
F1 = FOREACH F GENERATE rank_C, FLATTEN(A);
F2 = GROUP F1 BY (rank_C, A::b, A::c); -- group by field 1, field 2
F3 = FOREACH F2 GENERATE COUNT(F1) AS (cnt1:long), F1; -- if count = 2 then tuples match on field 1 and field 2
F4 = FILTER F3 BY cnt1 == 2; -- separate those alone
F5 = FOREACH F4 {
    DIS = distinct F1;
    GENERATE flatten(DIS);
};
F8 = JOIN F BY rank_C, F5 by rank_C;
F9 = FOREACH F8 GENERATE F::A;
Z = cross R4, F5; -- cross done to generate the result when all the tuples are different
Z1 = FILTER Z BY R4::rank_C != F5::DIS::rank_C;
Z2 = FOREACH Z1 GENERATE FLATTEN(R4::fil2);
res = UNION Z2, R6, F9; -- Z2: holds the highest-value tuple when all three fields differ
                        -- R6: holds the tuples when all the tuples in the group are the same
                        -- F9: holds the tuples when two fields of the tuples are the same
dump res;
This is my first post, so please let me know if I'm not clear enough. Here's what I'm trying to do - this is my dataset. My approach for this is a do loop with a lag but the result is rubbish.
data a;
  input @1 obs @4 mindate mmddyy10. @15 maxdate mmddyy10.;
  format mindate maxdate date9.;
  datalines;
 1 01/02/2013 01/05/2013
 2 01/02/2013 01/05/2013
 3 01/02/2013 01/05/2013
 4 01/03/2013 01/06/2013
 5 02/02/2013 02/08/2013
 6 02/02/2013 02/08/2013
 7 02/02/2013 02/08/2013
 8 03/10/2013 03/11/2013
 9 04/02/2013 04/22/2013
10 04/10/2013 04/22/2013
11 05/04/2013 05/07/2013
12 06/10/2013 06/20/2013
;
run;
Now, I'm trying to produce a new column - "Replacement" based on the following logic:
If a record's mindate occurs before its lag's maxdate, it cannot be a replacement for it. If it cannot be a replacement, skip forward (so 2, 3, and 4 cannot replace 1, but 5 can).
Otherwise... if the mindate is within 30 days of that maxdate, Replacement = Y; if not, Replacement = N. Once a record replaces another (so, in this case, 5 does replace 1, because 02/02/2013 is less than 30 days after 01/05/2013), it cannot double up as a replacement for another record. But if it's an N for one record above, it can still be a Y for some other record. So, 6 is now evaluated against 2, 7 against 3, etc. Since those two combos are both "Y", 8 is now evaluated versus 4, but because its mindate is more than 30 days after 4's maxdate, it's an N. But it's then evaluated against
And so on...
I should note that in a 100-record dataset, this implies the 100th record could technically replace the 1st, so I've been trying lags within loops. Any tips/help is greatly appreciated! Expected output:
obs mindate maxdate Replacement
1 02JAN2013 05JAN2013
2 02JAN2013 05JAN2013
3 02JAN2013 05JAN2013
4 03JAN2013 06JAN2013
5 02FEB2013 08FEB2013 Y
6 02FEB2013 08FEB2013 Y
7 02FEB2013 08FEB2013 Y
8 10MAR2013 11MAR2013 Y
9 02APR2013 22APR2013 Y
10 10APR2013 22APR2013 N
11 04MAY2013 07MAY2013 Y
12 10JUN2013 20JUN2013 Y
I think this is correct, assuming the asker was mistaken about Replacement = Y for obs = 12.
/*Get number of obs so we can build a temporary array to hold the dataset*/
data _null_;
set have nobs=nobs;
call symput("nobs",nobs);
stop;
run;
data want;
/*Load the dataset into a temporary array*/
array dates[2,&NOBS] _temporary_;
if _n_ = 1 then do _n_ = 1 by 1 until(eof);
set have end = eof;
dates[1,_n_] = maxdate;
dates[2,_n_] = 0;
end;
set have;
length replacement $1;
replacement = 'N';
do i = 1 to _n_ - 1 until(replacement = 'Y');
if dates[2,i] = 0 and 0 <= mindate - dates[1,i] <= 30 then do;
replacement = 'Y';
dates[2,i] = _n_;
replaces = i;
end;
end;
drop i;
run;
You could use a hash object + hash iterator instead of a temporary array if you preferred. I've also included an extra var, replaces, to show which previous row each row replaces.
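A hash-based version also avoids the preliminary pass that counts observations, since a hash object grows dynamically while a temporary array must be dimensioned up front.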
Here is a solution using SQL and hash tables. It is not optimal but it was the first method that sprang to mind.
/* Join the input with its self */
proc sql;
create table b as
select
a1.obs,
a2.obs as obs2
from a as a1
inner join a as a2
/* Set the replacement criteria */
on a1.maxdate < a2.mindate <= a1.maxdate + 30
order by a2.obs, a1.obs;
quit;
/* Create a mapping for replacements */
data c;
set b;
/* Create two empty hash tables so we can look up the used observations */
if _N_ = 1 then do;
declare hash h();
h.definekey("obs");
h.definedone();
declare hash h2();
h2.definekey("obs2");
h2.definedone();
end;
/* h2.find() returns nonzero when obs2 has not yet been used as a replacement */
if h2.find() ne 0 then do;
/* h.find() returns nonzero when obs has not yet been replaced */
if h.find() ne 0 then do;
/* Record both in the hash tables and output the mapping */
h2.add();
h.add();
output;
end;
end;
run;
/* Combine the replacement map with the original data */
proc sql;
select
a.*,
ifc(c.obs, "Y", "N") as Replace,
c.obs as Replaces
from a
left join c
on a.obs = c.obs2
order by a.obs;
quit;
There are several ways in which this can be simplified:
The dates can be brought through the first proc sql
The if statements can be combined
The final join could be replaced by a little extra logic in the data step
This is my current issue:
I have 53 variable headers in a SAS data set that need to be changed, for example:
Current_Week_0 TS | Current_Week_1 TS | Current_Week_2 TS -- etc.
I need it to change such that Current_Week_# TS = Current_Week_# -- dropping the TS
Is there a way to automate this such as looping it like:
i = 0,53
Current_week_i TS = Current_Week_i ?
I just don't understand the proper syntax.
Edit: Thank you for editing my formats Sergiu, appreciate it! :)
Edit:
I used the following code, but I get the following error:
Missing numeric suffix on a numbered variable list (TS-Current_Week_53)
DATA True_Start_8;
SET True_Start_7;
ARRAY oldnames (53) Current_Week_1 TS-Current_Week_53 TS;
ARRAY newnames (53) Current_Week_1-Current_Week_53;
DO i = 1 TO 53;
newnames(i) = oldnames(i) ;
END;
RUN;
@Joe EDIT
Here's what the data looks like before and after the "denorm" / transpose
BEFORE
Product ID CurrentWeek Market TS
X 75av2kz Current_Week_0 Z 1
Y 7sav2kz Current_Week_0 Z 1
X 752v2kz Current_Week_1 Z 1
Y 255v2kz Current_Week_1 Z 1
AFTER
Product ID Market Current_Week_0_TS Current_Week_1_TS
X 75av2kz Z 1 0
Y 7sav2kz Z 1 1
X 752v2kz Z 1 1
Y 255v2kz Z 1 0
This isn't too hard. I assume these are variable labels.
proc sql;
  select 'label '||strip(name)||'="'||strip(substr(label,1,length(label)-3))||'";'
    into :relabellist separated by ' '
  from dictionary.columns
  where libname='WORK' and memname='TRUE_START_7' /* memname values are stored in upper case */
    and name like '%TS';
quit;
data want;
set True_Start_7;
&relabellist.;
run;
Basically, the PROC SQL grabs the names that qualify for the relabelling and builds one macro variable containing a LABEL statement for each, with the trailing " TS" stripped from the label text. (A LABEL statement requires a literal string, so the new label has to be constructed here rather than with functions inside the DATA step.) You may need to change the logic behind the WHERE in the PROC SQL if the variable names don't also contain the TS.
Another option is to do this in the transpose. Your example data either doesn't match the example desired output, or there is some logic not explained, but this does the simple transpose; if there is a logical reason that the current_week_0/1 rows differ in yours from the below, explain why.
data have;
format currentWeek $20.;
input Product $ ID $ CurrentWeek $ Market $ TS;
datalines;
X 75av2kz Current_Week_0 Z 1
Y 7sav2kz Current_Week_0 Z 1
X 752v2kz Current_Week_1 Z 1
Y 255v2kz Current_Week_1 Z 1
;;;;
run;
proc sort data=have;
by market id product;
run;
proc transpose data=have out=want;
by market id product ;
id currentWeek;
var TS;
run;
I have a connection data set where each row marks "A connects B", in the form A B. The direct connection between A and B appears only once, either in the form A B or B A. I want to find all the connections at most one hop away, i.e. A and C are at most one hop away if A and C are directly connected, or A connects to C through some B.
For example, I have the following direct connection data
1 2
2 4
3 7
4 5
Then the resulting data I want is
1 {2,4}
2 {1,4,5}
3 {7}
4 {1,2,5}
5 {2,4}
7 {3}
Could anybody help me to find a way as efficient as possible? Thank you.
You could do this:
myudf.py
@outputSchema('bagofnums: {(num:int)}')
def merge_distinct(b1, b2):
    # Collect the link field from both bags, skipping duplicates so a
    # direct link that also shows up one hop away appears only once.
    seen = {}
    out = []
    for bag in (b1, b2):
        for ignore, n in bag:
            if n not in seen:
                seen[n] = 1
                out.append(n)
    return out
script.pig
register 'myudf.py' using jython as myudf ;
A = LOAD 'foo.in' USING PigStorage(' ') AS (num: int, link: int) ;
-- Essentially flips A
B = FOREACH A GENERATE link AS num, num AS link ;
-- We need to union the flipped A with A so that we will know:
-- 3 links to 7
-- 7 links to 3
-- Instead of just:
-- 3 links to 7
C = UNION A, B ;
-- C is in the form (num, link)
-- You can't do JOIN C BY link, C BY num ;
-- So, T is just C replicated
T = FOREACH C GENERATE * ;
D = JOIN C BY link, T BY num ;
E = FOREACH (FILTER D BY $0 != $3) GENERATE $0 AS num, $3 AS link_hopped ;
-- The output from E are (num, link) pairs where the link is one hop away. EG
-- 1 links to 2
-- 2 links to 4
-- 3 links to 7
-- The output will be:
-- 1 links to 4
F = COGROUP C BY num, E BY num ;
-- I use a UDF here to merge the bags together. Otherwise you will end
-- up with a bag for C (direct links) and E (links one hop away).
G = FOREACH F GENERATE group AS num, myudf.merge_distinct(C, E) ;
Schema and output for G using your sample input:
G: {num: int,bagofnums: {(num: int)}}
(1,{(2),(4)})
(2,{(4),(1),(5)})
(3,{(7)})
(4,{(5),(2),(1)})
(5,{(4),(2)})
(7,{(3)})
Is it possible to (efficiently) select a random tuple from a bag in pig?
I can just take the first result of a bag (as it is unordered), but in my case I need a proper random selection.
One (not efficient) solution is to count the number of tuples in the bag, take a random number within that range, loop through the bag, and stop whenever the number of iterations matches the random number. Does anyone know of faster/better ways to do this?
You could use RANDOM(), ORDER and LIMIT in a nested FOREACH statement to select one element with the smallest random number:
inpt = load 'group.txt' as (id:int, c1:bytearray, c2:bytearray);
groups = group inpt by id;
randoms = foreach groups {
rnds = foreach inpt generate *, RANDOM() as rnd; -- assign random number to each row in the bag
ordered_rnds = order rnds by rnd;
one_tuple = limit ordered_rnds 1; -- select tuple with the smallest random number
generate group as id, one_tuple;
};
dump randoms;
INPUT:
1 a r
1 a t
1 b r
1 b 4
1 e 4
1 h 4
1 k t
2 k k
2 j j
3 a r
3 e l
3 j l
4 a r
4 b t
4 b g
4 h b
4 j d
5 h k
OUTPUT:
(1,{(1,b,r,0.05172709255901231)})
(2,{(2,k,k,0.14351660053632986)})
(3,{(3,e,l,0.0854104195792681)})
(4,{(4,h,b,8.906013598960483E-4)})
(5,{(5,h,k,0.6219490873384448)})
If you run "dump randoms;" multiple times, you should get different results for each run.
Writing a UDF might give you better performance, as you would not need the secondary sort on the random value within the bag.
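As a rough sketch of such a UDF (the file name, function name, and schema below are assumptions, mirroring the inpt relation above; reservoir sampling picks a uniformly random tuple in a single pass with no sort):

pickrandom.py

from random import randint

@outputSchema('t:(id:int, c1:bytearray, c2:bytearray)')
def choose_one(bag):
    # Reservoir sampling: keep the n-th tuple with probability 1/n, so after
    # the full pass every tuple has had an equal chance of being kept.
    chosen = None
    n = 0
    for t in bag:
        n += 1
        if randint(1, n) == 1:
            chosen = t
    return chosen

You would register it with jython, as in the earlier examples, and generate pickrandom.choose_one(inpt) in place of the nested ORDER/LIMIT.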
I needed to do this myself, and surprisingly found that a very simple answer seems to work to get about 10% of an alias A:
B = filter A by RANDOM() < 0.1;
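Note that this keeps roughly 10% of all rows rather than selecting one tuple per bag; Pig's built-in SAMPLE operator (B = SAMPLE A 0.1;) does the same thing.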