Why does the DUMP operator return a path? - apache-pig

I have a simple Pig script:
CRE_28001 = LOAD '$input' USING PigStorage(';') AS (CIA_CD_CRV_CIA:chararray,CIA_DA_EM_CRV:chararray,CIA_CD_CTRL_BLCE:chararray);
-- Generate the file's columns
Data = FOREACH CRE_28001 GENERATE
(chararray) CIA_CD_CRV_CIA AS CIA_CD_CRV_CIA,
(chararray) CIA_DA_EM_CRV AS CIA_DA_EM_CRV,
(chararray) CIA_CD_CTRL_BLCE AS CIA_CD_CTRL_BLCE,
(chararray) RUB_202 AS RUB_202;
-- Apply the required filter
CRE_28001_FILTER = FILTER Data BY (RUB_202 == '6');
LIMIT_DATA = LIMIT CRE_28001_FILTER 10;
DUMP LIMIT_DATA;
I am sure that my filter is correct. The column RUB_202 has more than 100 rows with '6' as the value; I verified that several times.
Here is what I get:
Input(s):
Successfully read 444 records (583792 bytes) from: "/hdfs/data/adhoc/PR/02/RDO0/BB0/MGM28001-2019-08-19.csv"
Output(s):
Successfully stored 0 records in: "hdfs://ha-manny/hdfs/hadoop/pig/tmp/temp1618713487/tmp-1281522727"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1549794175705_3500029 -> job_1549794175705_3500031,
job_1549794175705_3500031
Note that I never asked Pig to save the data to hdfs://ha-manny/hdfs/hadoop/pig/tmp/temp1618713487/tmp-1281522727.
Why was this path generated automatically, and why can't I see any description or display of the data?
I get the same thing when I just try to view the result of the filter.

The DUMP operator always runs a job whose result is first materialized in a temporary HDFS location, which is the path you see; "Successfully stored 0 records" means the pipeline produced no output, not that DUMP misbehaved. The solution is to reference the columns by their positional index instead of by their names.
In other words:
Data = FOREACH CRE_28001 GENERATE
(chararray) $0 AS CIA_CD_CRV_CIA,
(chararray) $1 AS CIA_DA_EM_CRV,
(chararray) $2 AS CIA_CD_CTRL_BLCE,
(chararray) $3 AS RUB_202;
Then I used the TRIM function, because some columns had stray spaces in the data!
And it works.
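Putting the pieces together, here is a minimal sketch of the corrected script (where exactly TRIM was applied is my assumption; the original post does not show it):
CRE_28001 = LOAD '$input' USING PigStorage(';');
-- reference fields by position, cast, and trim stray spaces before filtering
Data = FOREACH CRE_28001 GENERATE
TRIM((chararray) $0) AS CIA_CD_CRV_CIA,
TRIM((chararray) $1) AS CIA_DA_EM_CRV,
TRIM((chararray) $2) AS CIA_CD_CTRL_BLCE,
TRIM((chararray) $3) AS RUB_202;
CRE_28001_FILTER = FILTER Data BY (RUB_202 == '6');
LIMIT_DATA = LIMIT CRE_28001_FILTER 10;
DUMP LIMIT_DATA;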

Related

How to filter out the first line from a variable in Pig

I imported a csv file to a variable like below:
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
below is the output of the first 3 lines:
tmp = limit basketball_players 3;
dump tmp;
("playerID","year","stint","tmID","lgID","GP","GS","minutes","points","oRebounds","dRebounds","rebounds","assists","steals","blocks","turnovers","PF","fgAttempted","fgMade","ftAttempted","ftMade","threeAttempted","threeMade","PostGP","PostGS","PostMinutes","PostPoints","PostoRebounds","PostdRebounds","PostRebounds","PostAssists","PostSteals","PostBlocks","PostTurnovers","PostPF","PostfgAttempted","PostfgMade","PostftAttempted","PostftMade","PostthreeAttempted","PostthreeMade","note")
("abramjo01","1946","1","PIT","NBA","47","0","0","527","0","0","0","35","0","0","0","161","834","202","178","123","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0",)
("aubucch01","1946","1","DTF","NBA","30","0","0","65","0","0","0","20","0","0","0","46","91","23","35","19","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0",)
You can see that the first line is the header of the table. I used the command below to filter out the first line, but it doesn't work.
grunt> players_raw = filter basketball_players by $1 > 0;
2017-05-06 11:03:36,389 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_INT 6 time(s).
When I dump the value of players_raw it returns empty. How can I filter the first row out from a variable?
Use RANK to generate a new column that adds row numbers to the dataset, then use that column to filter out the first row.
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
ranked = rank basketball_players;
basketball_players_without_header = Filter ranked by (rank_basketball_players > 1);
DUMP basketball_players_without_header;
Another way to do this:
basketball_players = load '/usr/data/basketball_players.csv' using PigStorage(',');
basketball_players_without_header = Filter basketball_players by NOT ($0 matches '.*playerID.*');
DUMP basketball_players_without_header;
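If the piggybank jar is available, a third option (my addition, not from the original answers) is CSVExcelStorage, which in recent piggybank versions can skip the header row at load time:
REGISTER piggybank.jar;
-- SKIP_INPUT_HEADER drops the first line of each input file
basketball_players = load '/usr/data/basketball_players.csv'
    using org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'UNIX', 'SKIP_INPUT_HEADER');
DUMP basketball_players;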

I have a seemingly simple Pig generate and then filter issue

I am trying to run a simple Pig script on a simple csv file and I cannot get FILTER to do what I want. I have a test.csv file that looks like this:
john,12,44,,0
bob,14,56,5,7
dave,13,40,5,5
jill,8,,,6
Here is my script that does not work:
people = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filtered = FILTER data BY first == 13;
DUMP filtered;
When I dump data, everything looks good. I get the name and the first and last integer as expected. When I describe the data, everything looks good:
data: {name: bytearray,first: int,second: int}
When I try to filter the data by the first value being 13, I get nothing. DUMP filtered simply returns nothing. Oddly enough, if I change it to first > 13, then all "rows" print out.
However, this script works:
peopletwo = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',') AS (f1:chararray,f2:int,f3:int,f4:int,f5:int);
datatwo = FOREACH peopletwo GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filteredtwo = FILTER datatwo BY first == 13;
DUMP filteredtwo;
What is the difference between filteredtwo and filtered (or data and datatwo for that matter)? I want to know why the new relation obtained using GENERATE (i.e. data) won't filter in the first script as one would expect.
Specify the datatypes in the LOAD itself. See below:
people = LOAD 'test5.csv' USING PigStorage(',') as (f1:chararray,f2:int,f3:int,f4:int,f5:int);
filtered = FILTER people BY f2 == 13;
DUMP filtered;
Output
(dave,13,40,5,5)
Changing the filter to use > gives
filtered = FILTER people BY f2 > 13;
Output
(bob,14,56,5,7)
EDIT
When the fields are loaded as bytearray you have to explicitly cast them in the FOREACH; writing $1 AS first:int only declares a schema and does not convert the underlying value, which is why the == comparison never matched. This works:
people = LOAD 'test5.csv' USING PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray,(int)$1 AS f1,(int)$4 AS f2;
filtered = FILTER data BY f1 == 13;
DUMP filtered;
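Equivalently (a minimal sketch, not from the original answer), you can keep the untyped load and cast right at the point of comparison:
people = LOAD 'test5.csv' USING PigStorage(',');
-- explicit cast from bytearray to int inside the filter expression
filtered = FILTER people BY (int)$1 == 13;
DUMP filtered;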

Pig Aggregate functions

My input file is below:
a,t1,1000,100
a,t1,2000,200
b,t2,1000,200
b,t2,5000,100
How do I find the count of distinct $0 values in the above file?
myinput = LOAD 'file' AS (a1:chararray,a2:chararray,amt:int,rate:int);
What needs to be done after the above statement?
Also, can I use that distinct count to divide some other value in a different relation?
First of all, the way you read the data is incorrect. If you try to dump myinput, you'll see that the whole row was read into the first field (a1) while the others are empty.
The reason is that you don't specify a load function, so Pig falls back to the built-in PigStorage(), which by default expects a tab-delimited file (and therefore ignores your commas!). You need to specify the load function explicitly via the USING clause and pass it the delimiter:
myInput = LOAD 'file' using PigStorage(',');
myInput2 = FOREACH myInput GENERATE $0 as (a1:chararray), $1 as (a2:chararray), $2 as (amt:int), $3 as (rate:int);
Moving on: to find the DISTINCT $0, you first have to extract field $0 into a separate relation. The reason is that the DISTINCT statement works on entire records, rather than on separate fields.
myField = FOREACH myInput2 GENERATE a1;
distinctA1 = DISTINCT myField;
Now the result of distinctA1 is {(a), (b)}. By using GROUP ... ALL you gather all of the records into a single group, and then what is left is to COUNT them:
grouped = GROUP distinctA1 all;
countA1 = FOREACH grouped GENERATE COUNT(distinctA1);
And now you're happy. :)
The complete code:
myInput = LOAD 'file' using PigStorage(',');
myInput2 = FOREACH myInput GENERATE $0 as (a1:chararray), $1 as (a2:chararray), $2 as (amt:int), $3 as (rate:int);
a1 = FOREACH myInput2 GENERATE a1;
distinctA1 = DISTINCT a1;
grouped = GROUP distinctA1 all;
countA1 = FOREACH grouped GENERATE COUNT(distinctA1);
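As for the second part of the question, which the answer above doesn't address: countA1 holds exactly one row, so Pig lets you project its field as a scalar inside a FOREACH over another relation. A hedged sketch:
-- divide a field of another relation by the distinct count
ratios = FOREACH myInput2 GENERATE a1, (double) amt / (double) countA1.$0;
DUMP ratios;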
You can do something like this:
myInput = LOAD 'file.txt' USING PigStorage(',') AS (a1:chararray,a2:chararray,amt:int,rate:int);
Data = GROUP myInput BY $0;
Data = FOREACH Data GENERATE $0;
Data = GROUP Data ALL;
Data = FOREACH Data GENERATE $0,COUNT($1);
NB: by grouping on $0 you are doing the same thing as a DISTINCT, and you get better performance ;)
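A third variant (my own sketch, same result) pushes the DISTINCT inside a nested FOREACH, so the whole count happens in one pass:
myInput = LOAD 'file' USING PigStorage(',') AS (a1:chararray,a2:chararray,amt:int,rate:int);
grouped = GROUP myInput ALL;
counted = FOREACH grouped {
    uniq = DISTINCT myInput.a1; -- distinct a1 values within the single group
    GENERATE COUNT(uniq);
};
DUMP counted;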

Pig Latin - not showing the right record numbers

I have written a pig script for wordcount which works fine. I could see the results from pig script in my output directory in hdfs. But towards the end of my console, I see the following:
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_local1695568121_0002 1 1 0 0 0 0 0 0 0 0 words_sorted SAMPLER
job_local2103470491_0003 1 1 0 0 0 0 0 0 0 0 words_sorted ORDER_BY /output/result_pig,
job_local696057848_0001 1 1 0 0 0 0 0 0 0 0 book,words,words_agg,words_grouped GROUP_BY,COMBINER
Input(s):
Successfully read 0 records from: "/data/pg5000.txt"
Output(s):
Successfully stored 0 records in: "/output/result_pig"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local696057848_0001 -> job_local1695568121_0002,
job_local1695568121_0002 -> job_local2103470491_0003,
job_local2103470491_0003
2014-07-01 14:10:35,241 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
As you can see, the job reports success, but the Input(s) and Output(s) do not: both say 0 records were successfully read/stored, and the counter values are all 0.
Why are the values zero? They should not be.
I am using Hadoop 2.2 and Pig 0.12.
Here is the script:
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
words = foreach book generate FLATTEN(TOKENIZE(lines)) as word;
words_grouped = group words by word;
words_agg = foreach words_grouped generate group as word, COUNT(words);
words_sorted = ORDER words_agg BY $1 DESC;
STORE words_sorted into '/output/result_pig' using PigStorage(':','-schema');
NOTE: my data is present in /data/pg5000.txt and not in the default directory, which is /usr/name/data/pg5000.txt.
EDIT: here is the output of printing my file to the console:
hadoop fs -cat /data/pg5000.txt | head -10
The Project Gutenberg EBook of The Notebooks of Leonardo Da Vinci, Complete
by Leonardo Da Vinci
(#3 in our series by Leonardo Da Vinci)
Copyright laws are changing all over the world. Be sure to check the
copyright laws for your country before downloading or redistributing
this or any other Project Gutenberg eBook.
This header should be the first thing seen when viewing this Project
Gutenberg file. Please do not remove it. Do not change or edit the
cat: Unable to write to output stream.
Please correct the following line
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
to
book = load '/data/pg5000.txt' using PigStorage(',') as (lines:chararray);
I am assuming the delimiter is a comma here; use whichever character actually separates the records in your file. This will solve the issue.
Also note:
If no argument is provided, PigStorage will assume tab-delimited format. If a delimiter argument is provided, it must be a single-byte character; any literal (e.g. 'a', '|') or known escape character (e.g. '\t', '\r') is a valid delimiter.
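Since this is a wordcount over whole lines, a simpler option (my suggestion, not part of the original answer) is the built-in TextLoader, which reads each line as one chararray and needs no delimiter at all:
book = load '/data/pg5000.txt' using TextLoader() as (lines:chararray);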

Pig Script - Min, Avg, Max

Let us say I have these in a file ...
1
2
3
Using a Pig script, how can I get this (the number, then the minimum, mean, and maximum on each line)?
1,1,2,3
2,1,2,3
3,1,2,3
Please let me know the Pig script. I am able to get the MIN, AVG, and MAX using Pig built-in functions, but I am not able to get them all on each line.
Thanks
Naga
Use the TOBAG built-in UDF to get your fields into a bag, and then you can use the MIN, AVG, and MAX functions on that bag. You should have no trouble using all three summary functions on a single record.
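For what it's worth, TOBAG aggregates across the fields of a single record, so this approach fits an input where each row carries several numbers. A minimal sketch under that assumption (the file name and schema are hypothetical):
rows = LOAD 'numbers.csv' USING PigStorage(',') AS (a:double, b:double, c:double);
-- build a bag from the row's fields, then summarize it per record
stats = FOREACH rows GENERATE a, b, c,
    MIN(TOBAG(a, b, c)) AS mn,
    AVG(TOBAG(a, b, c)) AS av,
    MAX(TOBAG(a, b, c)) AS mx;
DUMP stats;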
Here is my simple solution for the problem.
I had the following numbers as input,
temp2.txt
1
2
3
4
5
.
.
16
17
18
19
20
I followed these steps:
1] Loaded the data from the file.
2] Grouped all the data.
3] Found the average, minimum, and maximum of the grouped data.
4] For each value in the loaded data, generated the value along with the minimum, maximum, and average.
The code is as follows:
grunt> data = load '/home/temp2.txt' as (val);
grunt> g = group data all;
grunt> avg = foreach g generate AVG(data.val) as a;
grunt> min = foreach g generate MIN(data.val) as m;
grunt> max = foreach g generate MAX(data.val) as x;
grunt> values = foreach data generate val,min.m,max.x,avg.a;
grunt> dump values;
The following is the output:
(1,1.0,20.0,10.5)
(2,1.0,20.0,10.5)
(3,1.0,20.0,10.5)
(4,1.0,20.0,10.5)
(5,1.0,20.0,10.5)
(6,1.0,20.0,10.5)
(7,1.0,20.0,10.5)
(8,1.0,20.0,10.5)
(9,1.0,20.0,10.5)
(10,1.0,20.0,10.5)
(11,1.0,20.0,10.5)
(12,1.0,20.0,10.5)
(13,1.0,20.0,10.5)
(14,1.0,20.0,10.5)
(15,1.0,20.0,10.5)
(16,1.0,20.0,10.5)
(17,1.0,20.0,10.5)
(18,1.0,20.0,10.5)
(19,1.0,20.0,10.5)
(20,1.0,20.0,10.5)
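A slightly tighter variant (my sketch, same logic) computes all three aggregates in a single FOREACH over the grouped relation and gives val an explicit numeric type:
data = load '/home/temp2.txt' as (val:double);
g = group data all;
stats = foreach g generate MIN(data.val) as m, MAX(data.val) as x, AVG(data.val) as a;
-- stats has exactly one row, so its fields can be projected as scalars
values = foreach data generate val, stats.m, stats.x, stats.a;
dump values;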