How to filter out the first line from a variable in Pig - apache-pig

I imported a CSV file into a variable like below:
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
Below is the output of the first 3 lines:
tmp = limit basketball_players 3;
dump tmp;
("playerID","year","stint","tmID","lgID","GP","GS","minutes","points","oRebounds","dRebounds","rebounds","assists","steals","blocks","turnovers","PF","fgAttempted","fgMade","ftAttempted","ftMade","threeAttempted","threeMade","PostGP","PostGS","PostMinutes","PostPoints","PostoRebounds","PostdRebounds","PostRebounds","PostAssists","PostSteals","PostBlocks","PostTurnovers","PostPF","PostfgAttempted","PostfgMade","PostftAttempted","PostftMade","PostthreeAttempted","PostthreeMade","note")
("abramjo01","1946","1","PIT","NBA","47","0","0","527","0","0","0","35","0","0","0","161","834","202","178","123","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0",)
("aubucch01","1946","1","DTF","NBA","30","0","0","65","0","0","0","20","0","0","0","46","91","23","35","19","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0",)
You can see that the first line is the header of the table. I used the command below to filter out the first line, but it doesn't work.
grunt> players_raw = filter basketball_players by $1 > 0;
2017-05-06 11:03:36,389 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_INT 6 time(s).
When I dump players_raw it returns nothing. How can I filter the first row out of a variable?

Use RANK to add a new column with row numbers to the dataset, then use that column to filter out the first row.
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
ranked = rank basketball_players;
basketball_players_without_header = Filter ranked by (rank_basketball_players > 1);
DUMP basketball_players_without_header;
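Note that RANK prepends the row number as an extra first column. If you do not want it in downstream relations, you can project it away afterwards; a minimal sketch, assuming all of the original columns should be kept:
players_only = FOREACH basketball_players_without_header GENERATE $1 ..;  -- drop the rank column, keep the rest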
Another way to do this is to filter out any row containing the header text:
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
basketball_players_without_header = FILTER basketball_players BY NOT ($0 matches '.*playerID.*');
DUMP basketball_players_without_header;
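If the piggybank jar is available, yet another option is to let the loader skip the header for you. This is only a sketch (the jar path below is a placeholder), but CSVExcelStorage should also strip the surrounding double quotes, which is most likely why the numeric filter $1 > 0 over the PigStorage output returned nothing:
REGISTER /path/to/piggybank.jar;  -- placeholder path, point it at your piggybank build
basketball_players = LOAD '/usr/data/basketball_players.csv'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'UNIX', 'SKIP_INPUT_HEADER');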

Related

What is the difference between the cogroup and joins in PIG?

Using Apache PIG.
Sample Data: {1,2,3,4,5,6,7,8}.
I want the output in the following manner:
{1,2},{2,3},{3,4},{4,5},{5,6},{6,7},{7,8}
Not sure what you are looking for. See here for COGROUP & GROUP.
Now for the question related to the data.
Load the numbers, filter out 8, then for each remaining number generate (number, number + 1).
A = LOAD 'data.txt' USING PigStorage(',') as (numbers:int);
B = FILTER A BY (numbers < 8);
C = FOREACH B GENERATE numbers, numbers+1;
DUMP C;

Pig min command and order by

I have data in the form of shell, $917.14,$654.23,2013
I have to find the minimum value in columns $1 and $2.
I tried to do an ORDER BY on these columns in ascending order,
but the answer does not come out correct. Can anyone please help?
Refer to MIN:
A = LOAD 'test1.txt' USING PigStorage(',') as (f1:chararray,f2:float,f3:float,f4:int,f5:int,f6:int);
B = GROUP A ALL;
C = FOREACH B GENERATE MIN(A.f2),MIN(A.f3);
DUMP C;
EDIT1: The data you are loading has '$' in it. You will either have to clean it up and load it into a float field before applying the MIN function, or load it into a chararray, strip the '$', cast it to float and then apply MIN.
EDIT2: Here is a solution that does not remove the '$' from the original data but handles it in the Pig script.
Input:
shell,$820.48,$11992.70,996,891,1629
shell,$817.12,$2105.57,1087,845,1630
Bharat,$974.48,$5479.10,965,827,1634
Bharat,$943.70,$9162.57,939,895,1635
PigScript
A = LOAD 'test5.txt' USING TextLoader() as (line:chararray);
A1 = FOREACH A GENERATE REPLACE(line,'([^a-zA-Z0-9.,\\s]+)','');
B = FOREACH A1 GENERATE FLATTEN(STRSPLIT($0,','));
B1 = FOREACH B GENERATE $0,(float)$1,(float)$2,(int)$3,(int)$4,(int)$5;
C = GROUP B1 ALL;
D = FOREACH C GENERATE CONCAT('$',(chararray)MIN(B1.$1)),CONCAT('$',(chararray)MIN(B1.$2));
DUMP D;
Output
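If you would rather stay with the ORDER BY approach mentioned in the question, it also works once the '$' has been stripped and the fields cast to float, because the sort is then numeric instead of lexicographic. A rough sketch building on the B1 relation above:
E = ORDER B1 BY $1 ASC;  -- numeric sort on the cleaned second column
F = LIMIT E 1;           -- the first row now holds the minimum of $1
DUMP F;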

I have a seemingly simple Pig generate and then filter issue

I am trying to run a simple Pig script on a simple CSV file and I cannot get FILTER to do what I want. I have a test.csv file that looks like this:
john,12,44,,0
bob,14,56,5,7
dave,13,40,5,5
jill,8,,,6
Here is my script that does not work:
people = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filtered = FILTER data BY first == 13;
DUMP filtered;
When I dump data, everything looks good. I get the name and the first and last integer as expected. When I describe the data, everything looks good:
data: {name: bytearray,first: int,second: int}
When I try and filter out data by the first value being 13, I get nothing. DUMP filtered simply returns nothing. Oddly enough, if I change it to first > 13, then all "rows" will print out.
However, this script works:
peopletwo = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',') AS (f1:chararray,f2:int,f3:int,f4:int,f5:int);
datatwo = FOREACH peopletwo GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filteredtwo = FILTER datatwo BY first == 13;
DUMP filteredtwo;
What is the difference between filteredtwo and filtered (or data and datatwo for that matter)? I want to know why the new relation obtained using GENERATE (i.e. data) won't filter in the first script as one would expect.
Specify the data types in the LOAD itself. See below:
people = LOAD 'test5.csv' USING PigStorage(',') as (f1:chararray,f2:int,f3:int,f4:int,f5:int);
filtered = FILTER people BY f2 == 13;
DUMP filtered;
Output
Changing the filter to use > gives
filtered = FILTER people BY f2 > 13;
Output
EDIT
When converting from a bytearray you have to explicitly cast the fields in the FOREACH. This works:
people = LOAD 'test5.csv' USING PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray,(int)$1 AS f1,(int)$4 AS f2;
filtered = FILTER data BY f1 == 13;
DUMP filtered;

How to get the number of words per line in pig?

I'm trying to figure out how many words there are per line in a file in Pig. I've gotten as far as loading and splitting:
raw = load file;
words = FOREACH raw GENERATE TOKENIZE(*);
which gets me a bag of tuples, each containing a word. Then when I go to count these items:
counts = FOREACH words GENERATE COUNT(*);
I get an error:
org.apache.pig.backend.executionengine.ExecException: ERROR 2106: Error while computing count in COUNT
...
Caused by: java.lang.NullPointerException
Is that because some of the lines have an empty bag? Or is there something else I'm doing wrong?
If the problem is an empty bag, then you can try something like this (not tested):
raw = load file;
words = FOREACH raw GENERATE TOKENIZE(*) AS tokenized_words;
counts = FOREACH words GENERATE ((tokenized_words IS NULL OR IsEmpty(tokenized_words)) ? 0L : COUNT(tokenized_words)) AS total_count;
Here we write a conditional that checks whether tokenized_words is null or empty; if so we emit zero, otherwise the total count.
Can you try it like this?
Input:
Hi hello how are you
this is apache pig
works
like a charm
Pigscript:
A = LOAD 'input' AS (line:chararray);
B = FOREACH A GENERATE TOKENIZE(line);
C = FOREACH B GENERATE COUNT($0);
DUMP C;
Output:
(5)
(4)
(1)
()
(3)
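If you also want to see each line next to its word count, something along these lines should work too (an untested sketch using only the built-in TOKENIZE and COUNT, with a null guard for empty lines):
A = LOAD 'input' AS (line:chararray);
B = FOREACH A GENERATE line, (line IS NULL ? 0L : COUNT(TOKENIZE(line))) AS word_count;
DUMP B;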

Pig Aggregate functions

My input file is below:
a,t1,1000,100
a,t1,2000,200
b,t2,1000,200
b,t2,5000,100
How do I find the count of distinct $0 values in the above file?
myinput = LOAD 'file' AS (a1:chararray,a2:chararray,amt:int,rate:int);
After the above script, what needs to be done?
Also, can I use that distinct count to divide some other field in a different relation?
First of all, the way you read the data is incorrect. If you try to dump "myinput", you'll see that the whole row is read into the first field (a1), while the others are empty.
The reason is that you don't specify a load function, so the default PigStorage() built-in is used, which expects a tab-delimited file (so it ignores your commas!). You need to explicitly specify a load function (e.g. PigStorage()) via the USING clause and pass it the delimiter:
myInput = LOAD 'file' using PigStorage(',');
myInput2 = FOREACH myInput GENERATE $0 as (a1:chararray), $1 as (a2:chararray), $2 as (amt:int), $3 as (rate:int);
Moving on, to find the DISTINCT $0 you first have to extract field $0 into a separate relation, because the DISTINCT statement works on entire records rather than on separate fields.
myField = FOREACH myInput2 GENERATE a1;
distinctA1 = DISTINCT myField;
Now the result of distinctA1 is {(a), (b)}. By using GROUP ... ALL, you group all of the records together, and then all that is left is to COUNT them:
grouped = GROUP distinctA1 all;
countA1 = FOREACH grouped GENERATE COUNT(distinctA1);
And now you're happy. :)
The complete code:
myInput = LOAD 'file' using PigStorage(',');
myInput2 = FOREACH myInput GENERATE $0 as (a1:chararray), $1 as (a2:chararray), $2 as (amt:int), $3 as (rate:int);
a1 = FOREACH myInput2 GENERATE a1;
distinctA1 = DISTINCT a1;
grouped = GROUP distinctA1 all;
countA1 = FOREACH grouped GENERATE COUNT(distinctA1);
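As for the second part of the question (dividing a field in another relation by that distinct count): yes. Because countA1 contains exactly one row, Pig lets you reference it as a scalar inside another FOREACH. A rough sketch, where other, other_file, key and val are hypothetical names:
countA1 = FOREACH grouped GENERATE COUNT(distinctA1) AS cnt;                      -- name the count so it can be referenced
other = LOAD 'other_file' USING PigStorage(',') AS (key:chararray, val:double);   -- hypothetical relation to scale
ratio = FOREACH other GENERATE key, val / (double) countA1.cnt;                   -- scalar projection of the single-row count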
You can do something like this:
myInput = LOAD 'file.txt' USING PigStorage(',') AS (a1:chararray,a2:chararray,amt:int,rate:int);
Data = GROUP myInput BY $0;
Data = FOREACH Data GENERATE $0;
Data = GROUP Data ALL;
Data = FOREACH Data GENERATE $0,COUNT($1);
NB: By grouping on $0 you are doing the same thing as a DISTINCT, and you get better performance ;)