Pig Optimization on Group by - apache-pig

Let's assume that I have a large data set with the schema layout below:
id,name,city
---------------
100,Ajay,Chennai
101,John,Bangalore
102,Zach,Chennai
103,Deep,Bangalore
....
...
I have two styles of Pig code giving me the same output.
Style 1:
records = load 'user/inputfiles/records.txt' Using PigStorage(',') as (id:int,name:chararray,city:chararray);
records_grp = group records by city;
records_each = foreach records_grp generate group as city,COUNT(records.id) as emp_cnt;
dump records_each;
Style 2:
records = load 'user/inputfiles/records.txt' Using PigStorage(',') as (id:int,name:chararray,city:chararray);
records_each = foreach (group records by city) generate group as city,COUNT(records.id) as emp_cnt;
dump records_each;
In the second style I used a nested FOREACH. Does the Style 2 code run faster than the Style 1 code or not?
I would like to reduce the total time taken to complete the Pig job.
Does the Style 2 code achieve that, or is there no impact on the total time taken?
If somebody can confirm this, then I can run similar code on my cluster with a very large dataset.

The two solutions have the same cost.
However, if records_grp is not used elsewhere, Style 2 lets you avoid declaring an intermediate alias, so your script is shorter.
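If you want to verify this on your own data, one option (a sketch, reusing the aliases from Style 1) is to compare the plans Pig builds for each version; EXPLAIN prints the logical, physical, and MapReduce plans without running the job:
records = load 'user/inputfiles/records.txt' Using PigStorage(',') as (id:int,name:chararray,city:chararray);
records_grp = group records by city;
records_each = foreach records_grp generate group as city,COUNT(records.id) as emp_cnt;
explain records_each;
Running EXPLAIN on the Style 2 version should produce the same MapReduce plan, which is why the execution time is expected to be identical.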

Related

pig: getting count of strings from file A from file B

I am new to Pig and I am looking for help.
I have two files, A (templates) and B (data); both contain huge amounts of unstructured content.
The goal is to traverse file B (data) and find the count for each template (line) of file A.
I think it should work as a loop with a nested statement, but I do not know how to achieve this in Pig.
Example:
file1.txt
hello ravi
hi mohit
bye sameer
hi mohit
hi abc
hello cds
hi assaad
file2.txt
hi mohit
hi assaad
I need counts for both lines of file2.
The expected output may look like this:
hi mohit: 2
hi assaad: 1
Please do let me know.
Let's start by loading both of your datasets:
data = LOAD 'file1.txt' AS (line:chararray);
templates = LOAD 'file2.txt' AS (template:chararray);
Now we essentially need to JOIN the above relations on the templates. Once joined, we can GROUP on the template to get counts for each template. However, that would require two map-reduce stages, one for the JOIN and one for the GROUP BY. This is where you can use COGROUP. It is an extremely useful operation and you can read more about it here: https://www.tutorialspoint.com/apache_pig/apache_pig_cogroup_operator.htm
cogroupedData = COGROUP data BY line, templates BY template;
templateLines = FILTER cogroupedData BY (NOT IsEmpty(templates));
templateCounts = FOREACH templateLines GENERATE
group AS template,
COUNT(data.line) AS templateCount;
DUMP templateCounts;
What COGROUP does is essentially similar to a JOIN followed by a GROUP BY on the same key (template in this case), and it takes only one map-reduce stage. The FILTER applied above removes records that do not have a matching template in file2.txt.
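To make that concrete, with the sample files above cogroupedData holds one tuple per distinct key, with one bag from each relation (shown informally; the exact DUMP formatting may differ slightly):
(hi mohit, {(hi mohit),(hi mohit)}, {(hi mohit)})
(hi assaad, {(hi assaad)}, {(hi assaad)})
(bye sameer, {(bye sameer)}, {})
Rows like the last one, with an empty templates bag, are what the FILTER removes; COUNT(data.line) over the remaining rows then gives hi mohit: 2 and hi assaad: 1.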

Pig calculating avg of delay fails

I have a file of airline data containing the flight destination and the delay (the delay can be a negative or positive number).
A = load 'flightdelays' using Pigstorage(',');
B = foreach a generate $14 as delay:int, $17 as dest:chararray;
C = group b all; -- this fails with a cast error; I also get an error that it failed to read data from the input file
D =foreach c generate b.dest, AVG(b.delay);
When I execute this, I get 0 records read from the source file and the MapReduce job fails.
Why is it not able to calculate AVG?
Check the extension/path of the file. Is your file comma-separated? Also, there are plenty of case issues in your script.
PigStorage: the 's' is lowercase in your load statement.
A = load 'flightdelays' using PigStorage(',');
B = foreach a generate $14 as delay:int, $17 as dest:chararray;
There are no relations called a, b, and c. You are loading data into relation A, and so on.
First, A and a are treated as different names (in Pig, relation names are case sensitive). Second, when calculating an aggregate function on a relation grouped by some attribute,
the FOREACH should reference the grouping attribute together with the aggregate function.
In this scenario you used GROUP ... ALL, so you can't project b.dest alongside the aggregate function.
If you want the destination-wise AVG() delay, then you should group by dest, as in the sketch below.
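Putting those fixes together, a corrected version of the script might look like this (a sketch; the column positions $14 and $17 are taken from the question, and explicit casts from bytearray are added as a precaution because the file is loaded without a schema):
A = load 'flightdelays' using PigStorage(',');
B = foreach A generate (int)$14 as delay, (chararray)$17 as dest;
C = group B by dest;
D = foreach C generate group as dest, AVG(B.delay) as avg_delay;
dump D;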

What's the effective way to count rows in Pig?

In Pig, what is an effective way to get a count? We can do a GROUP ALL, but it is given only 1 reducer. When the data size is very large, say n terabytes, can we somehow use multiple reducers?
dataCount = FOREACH (GROUP data ALL) GENERATE
'count' as metric,
COUNT(data) as value;
Instead of using a GROUP ALL directly, you could divide the count into two steps: first, group by some field and count the number of rows per group; then, perform a GROUP ALL to sum those counts. This way, the first counting step can run in parallel.
Note, however, that if the field you use in the first GROUP BY has no duplicates, the resulting counts will all be 1, so there won't be any difference. Use a field that has many duplicates to improve performance.
See this example:
a;1
a;2
b;3
b;4
b;5
If we first group by the first field, which has duplicates, the final SUM will deal with 2 rows instead of 5:
A = load 'data' using PigStorage(';');
B = group A by $0;
C = foreach B generate COUNT(A);
dump C;
(2)
(3)
D = group C all;
E = foreach D generate SUM(C.$0);
dump E;
(5)
However, if we group by the second field, which is unique, the final SUM will deal with 5 rows:
A = load 'data' using PigStorage(';');
B = group A by $1;
C = foreach B generate COUNT(A);
dump C;
(1)
(1)
(1)
(1)
(1)
D = group C all;
E = foreach D generate SUM(C.$0);
dump E;
(5)
I just dug a bit more into this topic, and it seems you don't have to be afraid that a single reducer will have to process an enormous amount of data if you're using an up-to-date Pig version.
Algebraic UDFs handle COUNT smartly: the partial counts are calculated on the mapper side, so the reducer only has to deal with the aggregated data (one count per mapper).
I think this was introduced in 0.9.1, but 0.14.0 definitely has it:
Algebraic Interface
An aggregate function is an eval function that takes a bag and returns
a scalar value. One interesting and useful property of many aggregate
functions is that they can be computed incrementally in a distributed
fashion. We call these functions algebraic. COUNT is an example of an
algebraic function because we can count the number of elements in a
subset of the data and then sum the counts to produce a final output.
In the Hadoop world, this means that the partial computations can be
done by the map and combiner, and the final result can be computed by
the reducer.
But my previous answer was definitely wrong:
In the grouping you can use the PARALLEL n keyword; this sets the
number of reducers.
Increase the parallelism of a job by specifying the number of reduce
tasks, n. The default value for n is 1 (one reduce task).
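So with a recent Pig version, the straightforward one-step count is usually fine, because the combiner pre-aggregates the per-mapper counts before they reach the single reducer (a minimal sketch, assuming a relation named data):
cnt_grp = GROUP data ALL;
total = FOREACH cnt_grp GENERATE COUNT(data) AS total_rows;
DUMP total;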

How to split a data in particular column into two other columns using pig scripts?

Hi, I am working in big data. Since I am a newbie to Pig programming, please help me get the required output. I have a CSV file which has many columns; one of the columns is price, which has data like the following:
(10 Lacs)
(20 to 30 Lacs)
And I need this to be split as:
price   min    max
10      null   null
null    20     30
I have tried the following code:
a = LOAD '/user/folder1/filename.csv' using PigStorage(',')as(SourceWebsite:chararray,PropertyType:chararray,PropertyId:chararray,title:chararray,bedroom:int,bathroom:int,Balconies:chararray,price:chararray,pricepersqft:chararray,builtuparea:chararray,address:chararray,otherdetails:chararray,description:chararray,posted:chararray,Features:chararray,ContactDetails:chararray);
b = FOREACH a GENERATE STRSPLIT(price, 'to');
c = FOREACH b GENERATE FLATTEN(STRSPLIT(Price,',')) AS (MAX:int,MIN:int);
dump c;
Any help will be appreciated.
I just ran into the same issue, and here is how I managed to solve it.
Suppose the column called output_raw.output_line_raw looks like this:
abc|def
gh|j
Then I split it into multiple columns like so:
output_in_columns = FOREACH output_raw GENERATE
FLATTEN(STRSPLIT(output_line_raw,'\\|'));
To test whether it succeeded, I dumped the result after referring to the columns:
output_selection = FOREACH output_in_columns GENERATE
$0,
$1;
DUMP output_selection;
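Coming back to the price column from the original question, STRSPLIT alone cannot tell a single value apart from a range. One possible variation (a sketch, assuming the price field literally contains text such as '10 Lacs' or '20 to 30 Lacs'; the load statement is taken from the question, and min_price/max_price are hypothetical aliases since min and max are reserved words) is to use REGEX_EXTRACT, which returns null when its pattern does not match:
a = LOAD '/user/folder1/filename.csv' USING PigStorage(',') AS (SourceWebsite:chararray,PropertyType:chararray,PropertyId:chararray,title:chararray,bedroom:int,bathroom:int,Balconies:chararray,price:chararray,pricepersqft:chararray,builtuparea:chararray,address:chararray,otherdetails:chararray,description:chararray,posted:chararray,Features:chararray,ContactDetails:chararray);
-- a single value like '10 Lacs' matches only the first pattern,
-- a range like '20 to 30 Lacs' matches only the last two; non-matches become null
b = FOREACH a GENERATE
    REGEX_EXTRACT(price, '^\\D*(\\d+)\\s*Lacs\\D*$', 1) AS price,
    REGEX_EXTRACT(price, '^\\D*(\\d+)\\s*to\\s*(\\d+)\\s*Lacs\\D*$', 1) AS min_price,
    REGEX_EXTRACT(price, '^\\D*(\\d+)\\s*to\\s*(\\d+)\\s*Lacs\\D*$', 2) AS max_price;
DUMP b;
This yields rows shaped like (10,,) and (,20,30), matching the requested layout; the extracted values are chararrays and can be cast to int if needed.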

finding the sum of columns for each row in pig

I need to find the sum of the columns in every row.
Consider the data set
A,1,5,45,25,20
B,5,50,5,23,12
C,1,25,4,15,23
I am trying to get the output below:
(A,96)
(B,95)
(C,68)
I cannot use the built-in SUM function directly for this. Should I write a custom UDF, or is there any other way to do this?
You can define the schema and try the approach below.
input:
A,1,5,45,25,20
B,5,50,5,23,12
C,1,25,4,15,23
PigScript:
A = LOAD 'input' USING PigStorage(',') AS(f1:chararray,f2:int,f3:int,f4:int,f5:int,f6:int);
B = FOREACH A GENERATE f1,SUM(TOBAG(f2..));
DUMP B;
Output:
(A,96)
(B,95)
(C,68)
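SUM expects a bag of numbers, which is why TOBAG is used here: it packs the row's numeric fields into a single bag that SUM can aggregate, so no custom UDF is needed. Spelling the fields out explicitly is equivalent to the f2.. range projection (a sketch using the same field names as above):
B = FOREACH A GENERATE f1, SUM(TOBAG(f2, f3, f4, f5, f6)) AS row_sum;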