Pig Latin: search variable name and min value - apache-pig

I have a pig Latin question. I have a table with the following:
ID:Seller:Price:BID
1:John:20:B1
1:Ben:25:B1
2:John:60:B2
2:Chris:35:B2
3:John:20:B3
I'm able to group the table by ID using the following (assuming A is the LOAD table):
W = GROUP A BY ID;
But what I can't seem to figure out is the command to only return the values for the lowest price for each ID. In this example the final output should be:
1:John:20:B1
2:Chris:35:B2
3:John:20:B3
Cheers,
Shivedog

Generally you'll want to GROUP by the BID, then use MIN. However, since you want the whole tuple associated with the minimum value you'll want to use a UDF to do this.
myudfs.py
@outputSchema('vals: (ID: int, Seller: chararray, Price: chararray, BID: chararray)')
def get_min_tuple(bag):
    return min(bag, key=lambda x: x[2])
myscript.pig
register 'myudfs.py' using jython as myudfs ;
-- A: (ID: int, Seller: chararray, Price: chararray, BID: chararray)
B = GROUP A BY BID ;
C = FOREACH B GENERATE group AS BID, FLATTEN(myudfs.get_min_tuple(A)) ;
-- Now you can do a JOIN on C if you need additional fields
Remember to change the types (int, chararray, etc.) to the appropriate values.
Note: If multiple items in A have the same minimum price for an ID, then this will only return one of them.
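The behaviour of the jython UDF is easy to check outside Pig: `min` with a `key` function keeps the whole tuple with the smallest price, which is exactly what `get_min_tuple` does for each bag. A plain-Python sketch using the question's sample rows:

```python
from itertools import groupby

# The question's sample rows as (ID, Seller, Price, BID) tuples.
rows = [
    (1, "John", 20, "B1"),
    (1, "Ben", 25, "B1"),
    (2, "John", 60, "B2"),
    (2, "Chris", 35, "B2"),
    (3, "John", 20, "B3"),
]

# Rows are already sorted by ID, so groupby mirrors GROUP A BY ID;
# min(..., key=...) keeps the whole cheapest tuple per group, like get_min_tuple.
cheapest = [min(g, key=lambda t: t[2]) for _, g in groupby(rows, key=lambda t: t[0])]
print(cheapest)
```

As with the UDF, when two rows tie on price only one of them survives.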

option (1) - get all records with maximum price:
Use the new (Pig 0.11) RANK operator:
A = LOAD ...;
B = RANK A BY Price DESC;
C = FILTER B BY $0 == 1;
option (2) - get all records with maximum price:
Pig version below 0.11:
a = load ...;
b = group a by all;
c = foreach b generate MAX(a.price) as maxprice;
d = JOIN a BY price, c BY maxprice;
option (3) - use org.apache.pig.piggybank.evaluation.ExtremalTupleByNthField to get one of the tuples with maximum price:
define mMax org.apache.pig.piggybank.evaluation.ExtremalTupleByNthField('3', 'max');
a = load ...;
b = group a by all;
c = foreach b generate mMax(a);
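The idea behind option (2) - compute one global extreme with GROUP ALL + MAX, then keep every row that matches it (so ties survive, unlike the UDF approach) - can be sketched in plain Python over some hypothetical rows:

```python
# Hypothetical (ID, Seller, Price) rows standing in for relation a.
rows = [(1, "John", 20), (1, "Ben", 25), (2, "John", 60), (2, "Chris", 35)]

# GROUP ALL + MAX(a.price) computes a single global maximum...
maxprice = max(r[2] for r in rows)

# ...and the JOIN on price == maxprice keeps every row that reaches it.
top = [r for r in rows if r[2] == maxprice]
print(top)
```

Swapping `max` for `min` here corresponds to swapping MAX for MIN in the Pig script when the minimum is wanted instead.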


Take MIN EFF_DT and MAX_CANC_dt from data in PIG

Schema :
TYP|ID|RECORD|SEX|EFF_DT|CANC_DT
DMF|1234567|98765432|M|2011-08-30|9999-12-31
DMF|1234567|98765432|M|2011-04-30|9999-12-31
DMF|1234567|98765432|M|2011-04-30|9999-12-31
Suppose I have multiple records like this. I only want to display records that have the minimum EFF_DT and the maximum CANC_DT.
I want to display just this one record:
DMF|1234567|98765432|M|2011-04-30|9999-12-31
Thank you
Get the min EFF_DT and max CANC_DT and use them to filter the relation. Assuming you have a relation A:
B = GROUP A ALL;
X = FOREACH B GENERATE MIN(A.EFF_DT);
Y = FOREACH B GENERATE MAX(A.CANC_DT);
C = FILTER A BY ((EFF_DT == X.$0) AND (CANC_DT == Y.$0));
D = DISTINCT C;
DUMP D;
Let's say you have this data (sample here):
DMF|1234567|98765432|M|2011-08-30|9999-12-31
DMF|1234567|98765432|M|2011-04-30|9999-12-31
DMF|1234567|98765432|M|2011-04-30|9999-12-31
DMX|1234567|98765432|M|2011-12-30|9999-12-31
DMX|1234567|98765432|M|2011-04-30|9999-12-31
DMX|1234567|98765432|M|2011-04-01|9999-12-31
Perform these steps:
-- 1. Read data, if you have not
A = load 'data.txt' using PigStorage('|') as (typ: chararray, id:chararray, record:chararray, sex:chararray, eff_dt:datetime, canc_dt:datetime);
-- 2. Group data by the attribute you like to, in this case it is TYP
grouped = group A by typ;
-- 3. Now, generate MIN/MAX for each group. Also, only keep relevant fields
min_max = foreach grouped generate group, MIN(A.eff_dt) as min_eff_dt, MAX(A.canc_dt) as max_canc_dt;
--
dump min_max;
(DMF,2011-04-30T00:00:00.000Z,9999-12-31T00:00:00.000Z)
(DMX,2011-04-01T00:00:00.000Z,9999-12-31T00:00:00.000Z)
If you need to, change datetime to chararray.
Note: there are different ways of doing this; the approach shown here produces the desired result in two steps (excluding the load): GROUP and FOREACH.
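The GROUP-then-aggregate step is straightforward to mirror in plain Python; this is only a sketch of the logic, using the sample rows above. Note that ISO dates compare correctly as strings, so no date parsing is needed:

```python
from itertools import groupby

# (TYP, EFF_DT, CANC_DT) rows from the sample, already sorted by TYP.
rows = [
    ("DMF", "2011-08-30", "9999-12-31"),
    ("DMF", "2011-04-30", "9999-12-31"),
    ("DMF", "2011-04-30", "9999-12-31"),
    ("DMX", "2011-12-30", "9999-12-31"),
    ("DMX", "2011-04-30", "9999-12-31"),
    ("DMX", "2011-04-01", "9999-12-31"),
]

min_max = []
for typ, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)  # materialise: the group iterator can only be read once
    # MIN(A.eff_dt) and MAX(A.canc_dt) per group
    min_max.append((typ, min(r[1] for r in grp), max(r[2] for r in grp)))
print(min_max)
```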

PIG: sum and division, creating an object

I am writing a Pig program that loads a file that separates its entries with tabs
ex: name TAB year TAB count TAB...
file = LOAD 'file.csv' USING PigStorage('\t') as (type: chararray, year: chararray,
match_count: float, volume_count: float);
-- Group by type
grouped = GROUP file BY type;
-- Flatten
by_type = FOREACH grouped GENERATE FLATTEN(group) AS (type, year, match_count, volume_count);
group_operat = FOREACH by_type GENERATE
SUM(match_count) AS sum_m,
SUM(volume_count) AS sum_v,
(float)sum_m/sm_v;
DUMP group_operat;
The issue lies in the group operations object I am trying to create. I want to sum all the match counts, sum all the volume counts, and divide the match counts by the volume counts.
What am I doing wrong in my arithmetic operations/object creation?
An error I receive is line 7, column 11> pig script failed to validate: org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1031: Incompatable schema: left is "type:NULL,year:NULL,match_count:NULL,volume_count:NULL", right is "group:chararray"
Thank you.
Try it like this; it will return the type and the sum.
UPDATED with working code:
input.txt
A 2001 10 2
A 2002 20 3
B 2003 30 4
B 2004 40 1
PigScript:
file = LOAD 'input.txt' USING PigStorage() AS (type: chararray, year: chararray,
match_count: float, volume_count: float);
grouped = GROUP file BY type;
group_operat = FOREACH grouped {
sum_m = SUM(file.match_count);
sum_v = SUM(file.volume_count);
GENERATE group,(float)(sum_m/sum_v) as sum_mv;
}
DUMP group_operat;
Output:
(A,6.0)
(B,14.0)
try this,
file = LOAD 'file.csv' USING PigStorage('\t') as (type: chararray, year: chararray,
match_count: float, volume_count: float);
grouped = GROUP file BY (type,year);
group_operat = FOREACH grouped GENERATE group,
SUM(file.match_count) AS sum_m,
SUM(file.volume_count) AS sum_v,
(float)(SUM(file.match_count)/SUM(file.volume_count)) as sum_mv;
The above script gives results grouped by type and year; if you want to group only by type, then use:
grouped = GROUP file BY type;
group_operat = FOREACH grouped GENERATE group,file.year,
SUM(file.match_count) AS sum_m,
SUM(file.volume_count) AS sum_v,
(float)(SUM(file.match_count)/SUM(file.volume_count)) as sum_mv;
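The nested-FOREACH answer's logic - accumulate the two sums per group, then divide - can be sketched in plain Python with the `input.txt` sample to confirm the `(A,6.0)`/`(B,14.0)` output:

```python
# (type, match_count, volume_count) rows from input.txt; year is irrelevant here.
data = [("A", 10.0, 2.0), ("A", 20.0, 3.0), ("B", 30.0, 4.0), ("B", 40.0, 1.0)]

# Accumulate (sum_m, sum_v) per type, as the nested FOREACH does per group.
totals = {}
for typ, m, v in data:
    sm, sv = totals.get(typ, (0.0, 0.0))
    totals[typ] = (sm + m, sv + v)

# GENERATE group, sum_m / sum_v
ratios = {typ: sm / sv for typ, (sm, sv) in totals.items()}
print(ratios)
```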

Pig - Calculating percentage of total for a field

I am trying to calculate the percentage of total for a value in a field.
For example, for data (name, ct)
(john, 1000)
(Dan, 2000)
(liz, 2000)
I want the output to be (name, % of ct to the total)
(john, .2)
(Dan, .4)
(liz, .4)
data = load 'fakedata.txt' as (name:chararray,sqr:chararray,ct:int);
A = foreach data generate name, ct;
A = FILTER A by ct is not null;
B = group A all;
C = foreach B generate SUM(A.ct) as tot;
D = foreach A generate name, ct/(double)C.tot;
dump D;
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Invalid alias: C in {name: bytearray,ct: int}
I am following exactly the example given in http://pig.apache.org/docs/r0.10.0/basic.html in the section "Casting Relations to Scalars".
If I say DUMP C, then the output is correctly generated as 5000, so the problem is in D. Any help is greatly appreciated.
The below works for me without any error. This is basically same as what you have. Not sure why you are getting this error. Which version of pig are you using?
data = load 'StackData' as (name:chararray, marks:int);
grp = GROUP data all;
allcount = foreach grp generate SUM(data.marks) as total;
perc = foreach data generate name, marks/(double)allcount.total;
dump perc;
In relation D, you are iterating over relation A again - it knows nothing about C.
I'd suggest calculating the SUM, then doing a JOIN so each entry contains the sum. That way you'll be able to calculate the % of total for each entry.
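The scalar-projection pattern - one GROUP ALL + SUM producing a single total that every row then divides by - reduces to a few lines of plain Python over the question's sample data:

```python
# (name, ct) rows from the question.
data = [("john", 1000), ("Dan", 2000), ("liz", 2000)]

# GROUP ALL + SUM gives one scalar total...
total = sum(ct for _, ct in data)

# ...which each row divides by, as in `marks/(double)allcount.total`.
perc = [(name, ct / total) for name, ct in data]
print(perc)
```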

accessing an element like array in pig

I have data in the form:
id,val1,val2
example
1,0.2,0.1
1,0.1,0.7
1,0.2,0.3
2,0.7,0.9
2,0.2,0.3
2,0.4,0.5
So first I want to sort each id by val1 in decreasing order, so something like:
1,0.2,0.1
1,0.2,0.3
1,0.1,0.7
2,0.7,0.9
2,0.4,0.5
2,0.2,0.3
And then select the id,val2 combination of the second element for each id.
So for example:
1,0.3
2,0.5
How do I approach this?
Thanks
Pig is a scripting language, not a relational one like SQL; it is well suited to working with groups, with operators nested inside a FOREACH. Here is the solution:
A = LOAD 'input' USING PigStorage(',') AS (id:int, v1:float, v2:float);
B = GROUP A BY id; -- isolate all rows for the same id
C = FOREACH B { -- here comes the scripting bit
elems = ORDER A BY v1 DESC; -- sort rows belonging to the id
two = LIMIT elems 2; -- select top 2
two_invers = ORDER two BY v1 ASC; -- sort in opposite order to bubble second value to the top
second = LIMIT two_invers 1;
GENERATE FLATTEN(group) as id, FLATTEN(second.v2);
};
DUMP C;
In your example id 1 has two rows with v1 == 0.2 but different v2, thus the second value for the id 1 can be 0.1 or 0.3
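The nested ORDER/LIMIT dance (top 2 descending, reverse, keep 1) is just "sort each group by v1 descending and take the second row"; a plain-Python sketch over the question's data, with the same tie caveat for id 1:

```python
from collections import defaultdict

rows = [(1, 0.2, 0.1), (1, 0.1, 0.7), (1, 0.2, 0.3),
        (2, 0.7, 0.9), (2, 0.2, 0.3), (2, 0.4, 0.5)]

# GROUP A BY id
groups = defaultdict(list)
for r in rows:
    groups[r[0]].append(r)

# Sort each group by v1 descending and take the second row's v2,
# mirroring ORDER ... DESC / LIMIT 2 / ORDER ... ASC / LIMIT 1.
second = {i: sorted(g, key=lambda t: t[1], reverse=True)[1][2]
          for i, g in groups.items()}
print(second)
```

Here the stable sort happens to keep 0.3 as id 1's answer, but as noted above, either 0.1 or 0.3 is a valid result for the tied v1 values.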
A = LOAD 'input' USING PigStorage(',') AS (id:int, v1:float, v2:float);
B = ORDER A BY id ASC, v1 DESC;
C = FOREACH B GENERATE id, v2;
DUMP C;

Pig split and join

I have a requirement to propagate field values from one row to another for a given type of record.
for example my raw input is
1,firefox,p
1,,q
1,,r
1,,s
2,ie,p
2,,s
3,chrome,p
3,,r
3,,s
4,netscape,p
the desired result
1,firefox,p
1,firefox,q
1,firefox,r
1,firefox,s
2,ie,p
2,ie,s
3,chrome,p
3,chrome,r
3,chrome,s
4,netscape,p
I tried
A = LOAD 'file1.txt' using PigStorage(',') AS (id:int,browser:chararray,type:chararray);
SPLIT A INTO B IF (type =='p'), C IF (type!='p' );
joined = JOIN B BY id FULL, C BY id;
joinedFields = FOREACH joined GENERATE B::id, B::type, B::browser, C::id, C::type;
dump joinedFields;
the result I got was
(,,,1,p )
(,,,1,q)
(,,,1,r)
(,,,1,s)
(2,p,ie,2,s)
(3,p,chrome,3,r)
(3,p,chrome,3,s)
(4,p,netscape,,)
Any help is appreciated, Thanks.
Pig is not exactly SQL; it is built with data flows, MapReduce, and groups in mind (joins are also available). You can get the result using GROUP BY, a FILTER nested in a FOREACH, and FLATTEN.
inpt = LOAD 'file1.txt' using PigStorage(',') AS (id:int,browser:chararray,type:chararray);
grp = GROUP inpt BY id;
Result = FOREACH grp {
P = FILTER inpt BY type == 'p'; -- keep the record that contains p for this id
PL = LIMIT P 1; -- make sure there is just one
GENERATE FLATTEN(inpt.(id,type)), FLATTEN(PL.browser); -- convert bags produced by group by back to rows
};
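The effect of the nested FILTER plus the double FLATTEN is to look up each id's browser from its 'p' record and stamp it onto every row of that id; a plain-Python sketch over (part of) the question's input:

```python
# (id, browser, type) rows; browser is blank except on the 'p' record.
rows = [(1, "firefox", "p"), (1, "", "q"), (1, "", "r"), (1, "", "s"),
        (2, "ie", "p"), (2, "", "s"), (4, "netscape", "p")]

# The FILTER inside the FOREACH finds each id's 'p' record and its browser...
browsers = {id_: b for id_, b, t in rows if t == "p"}

# ...which is then flattened back onto every row of that id.
filled = [(id_, browsers[id_], t) for id_, _, t in rows]
print(filled)
```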