Pig and Parsing issue - apache-pig

I am trying to figure out the best way to parse key-value pairs with Pig in a dataset with mixed delimiters, as below.
My sample dataset is in the format below
a|b|c|k1=v1 k2=v2 k3=v3
The final output which I require here is
k1,v1,k2,v2,k3,v3
I guess one way to do this is to
A = load 'sample' USING PigStorage('|') as (a1,b1,c1,d1);
B = foreach A generate d1;
and here I get (k1=v1 k2=v2 k3=v3) for B.
Is there any way I can further parse this by ' ' (space) so as to get 3 fields, k1=v1, k2=v2 and k3=v3, which can then be further split into k1,v1,k2,v2,k3,v3 using STRSPLIT and FLATTEN on '='?
Thanks for the help!
San

If you know beforehand how many key=value pairs are in each record, try this:
A = load 'sample' USING PigStorage('|') as (a1,b1,c1,d1);
B = foreach A generate d1;
C = FOREACH B GENERATE STRSPLIT($0,'[= ]',6); -- 6 = total output fields, i.e. 2 per key=value pair
D = FOREACH C GENERATE FLATTEN($0);
DUMP D;
output:
(k1,v1,k2,v2,k3,v3)
If you don't know the number of key=value pairs, use ' ' as the delimiter and remove the unwanted prefix from the $0 column:
A = LOAD 'sample' USING PigStorage(' ') as (a:chararray,b:chararray,c:chararray);
B = FOREACH A GENERATE STRSPLIT(SUBSTRING(a, LAST_INDEX_OF(a,'|')+1, (int)SIZE(a)),'=',2),STRSPLIT(b,'=',2),STRSPLIT(c,'=',2);
C = FOREACH B GENERATE FLATTEN($0), FLATTEN($1), FLATTEN($2);
DUMP C;
output:
(k1,v1,k2,v2,k3,v3)
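If the number of pairs truly varies per record, here is a hedged alternative sketch (relation names are illustrative; TOKENIZE splits the last field on whitespace into a bag, so each pair lands on its own row rather than as extra columns on one row):
P = LOAD 'sample' USING PigStorage('|') AS (a1,b1,c1,d1);
Q = FOREACH P GENERATE FLATTEN(TOKENIZE((chararray)d1)) AS pair;
R = FOREACH Q GENERATE FLATTEN(STRSPLIT(pair,'=',2)) AS (k:chararray,v:chararray);
DUMP R;
-- expected: (k1,v1) (k2,v2) (k3,v3), one row per pair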

Related

How to loop through tuples in a Bag, Pig

I am new to pig scripting.
I have an input, (A,B,{(XYZ,123,CDE)})
I am looking to loop through the bag inside and print the following records.
(A,B,XYZ)
(A,B,123)
(A,B,CDE)
Can someone please help me out!
Let's say X is your relation and it holds (A,B,{(XYZ,123,CDE)}). TOBAG converts its arguments into a bag, and FLATTEN un-nests tuples and bags. FLATTEN cannot be nested inside another function, so flatten the bag first and then re-bag and flatten its fields:
Y = FOREACH X GENERATE $0, $1, FLATTEN($2);
Z = FOREACH Y GENERATE $0, $1, FLATTEN(TOBAG($2, $3, $4));
Solved!!
Let us load the file below (tab-separated):
A B {(XYZ,123,CDE)}
input_plus_bag = load '' USING PigStorage() AS (entry1:chararray, entry2:chararray, bag1:bag{(te1:chararray, te2:int, te3:chararray)});
intermed_output = foreach input_plus_bag generate entry1, entry2, FLATTEN(bag1);
Dump intermed_output;
This will give
(A,B,XYZ,123,CDE)
DESCRIBE intermed_output;
intermed_output: {entry1: chararray,entry2: chararray,bag1::te1: chararray,bag1::te2: int,bag1::te3: chararray}
Now perform the TOBAG operation:
intermed2_output = foreach intermed_output generate entry1, entry2, TOBAG(bag1::te1,bag1::te2,bag1::te3);
DUMP intermed2_output;
This will result in the below output:-
(A,B,{(XYZ),(123),(CDE)})
Now the final step is to FLATTEN the bag:
final_output = foreach intermed2_output generate entry1, entry2, FLATTEN($2);
And we have our desired output:-
(A,B,XYZ)
(A,B,123)
(A,B,CDE)

Pig script to count the number of letters in a file

I want to extend the Hadoop word-count hello-world program to count the number of letters in the input file.
I have written this so far and I'm unable to figure out what is wrong with this code. Any help identifying the issue will be appreciated.
A = load '/tmp/alice.txt';
B = foreach A generate flatten(TOKENIZE((chararray)$0)) as word;
C = filter B by word matches '\\w+';
D = foreach C generate flatten(REGEX_EXTRACT_ALL(word, '([a-zA-Z])')) as letter;
E = group D by letter;
F = foreach E generate COUNT(D), group;
store F into '/tmp/alice_wordcount';
Let me say that I am a Pig newbie, but somehow this query got me interested. I diverged into all kinds of complex stuff like nested FOREACH, UDFs, etc., but in the end the answer is pretty simple: it's just a correction to one of your Pig Latin lines, as below:
D = foreach C generate flatten(TOKENIZE(REPLACE(word,'','|'), '|')) as letter;
Instead of using REGEX_EXTRACT_ALL, I opt to REPLACE each letter boundary with a special character ('|' here, though you can use another uncommon sequence if you like) and then TOKENIZE around that delimiter.
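As a quick hedged illustration of the trick (relation C is from the script above and X is just an illustrative alias; the comments trace the word 'cat'):
X = FOREACH C GENERATE word, REPLACE(word,'','|');   -- 'cat' becomes '|c|a|t|'
-- TOKENIZE('|c|a|t|','|') then yields the bag {(c),(a),(t)}, one letter per tuple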
Try the following code:
A = load '/tmp/alice.txt';                                                   -- load the data
B = foreach A generate flatten(TOKENIZE((chararray)$0)) as word;             -- split each line into words
C = foreach B generate flatten(TOKENIZE(REPLACE($0,'','|'),'|')) as letter;  -- split words into letters
D = GROUP C BY letter;                                                       -- group the letters
E = foreach D generate COUNT(C), group;                                      -- count each letter
store E into '/tmp/alice_wordcount';                                         -- store the results

Pig min command and order by

I have data in the form: shell, $917.14,$654.23,2013
I have to find out the minimum value in columns $1 and $2.
I tried to do an ORDER BY on these columns in ascending order,
but the answer is not coming out correct. Can anyone please help?
Refer to MIN:
A = LOAD 'test1.txt' USING PigStorage(',') as (f1:chararray,f2:float,f3:float,f4:int,f5:int,f6:int);
B = GROUP A ALL;
C = FOREACH B GENERATE MIN(A.f2),MIN(A.f3);
DUMP C;
EDIT1: The data you are loading has '$' in it. You will either have to clean it up and load it into a float field to apply the MIN function, or load it into a chararray, replace the '$', cast it to float, and then apply MIN.
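A minimal hedged sketch of that second option (relation names are illustrative; the file name test1.txt and the six-column layout are carried over from above; REPLACE treats its pattern as a regex, so the '$' is escaped):
V = LOAD 'test1.txt' USING PigStorage(',') AS (f1:chararray,f2:chararray,f3:chararray,f4:int,f5:int,f6:int);
V1 = FOREACH V GENERATE f1, (float)REPLACE(f2,'\\$','') AS f2, (float)REPLACE(f3,'\\$','') AS f3; -- strip '$' and cast
V2 = GROUP V1 ALL;
V3 = FOREACH V2 GENERATE MIN(V1.f2), MIN(V1.f3);
DUMP V3;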
EDIT2: Here is a solution that keeps the '$' in the original data and handles it in the Pig script.
Input:
shell,$820.48,$11992.70,996,891,1629
shell,$817.12,$2105.57,1087,845,1630
Bharat,$974.48,$5479.10,965,827,1634
Bharat,$943.70,$9162.57,939,895,1635
PigScript:
A = LOAD 'test5.txt' USING TextLoader() as (line:chararray);
A1 = FOREACH A GENERATE REPLACE(line,'([^a-zA-Z0-9.,\\s]+)','');
B = FOREACH A1 GENERATE FLATTEN(STRSPLIT($0,','));
B1 = FOREACH B GENERATE $0,(float)$1,(float)$2,(int)$3,(int)$4,(int)$5;
C = GROUP B1 ALL;
D = FOREACH C GENERATE CONCAT('$',(chararray)MIN(B1.$1)),CONCAT('$',(chararray)MIN(B1.$2));
DUMP D;
Output

Change separator and generate output through Pig

I want to generate the below output from the given input. What would be the best way to do that?
Input:-
"column,1A,extra-A1,extra-A2",column2A,column3A
"((column,1B,extra-B1))",column2B,column3B
"column,1C,extra-C1,extra-C2,extra-C3,extra-C4",column2C,column3C
"column,1D,extra-D1",column2D,column3D
Output:-
column,1A,extra-A1,extra-A2|column2A|column3A
((column,1B,extra-B1))|column2B|column3B
column,1C,extra-C1,extra-C2,extra-C3,extra-C4|column2C|column3C
column,1D,extra-D1|column2D|column3D
I was able to resolve it using the script below; let me know if you have a better option.
Pig Script:-
A = LOAD '/home/hduser/pig_ex1/sample1.txt' AS line;
B = FOREACH A GENERATE SUBSTRING(line,1,(LAST_INDEX_OF(line,'"'))) AS firstcol, SUBSTRING(line,(LAST_INDEX_OF(line,'"')+2),(INT) SIZE(line)) as lastcol;
C = FOREACH B GENERATE firstcol, FLATTEN(STRSPLIT(lastcol,'\\,',2)) AS (secondcol,thirdcol);
D = FOREACH C GENERATE CONCAT(firstcol,'|',secondcol,'|',thirdcol);
DUMP D;
Output:-
(column,1A,extra-A1,extra-A2|column2A|column3A)
(((column,1B,extra-B1))|column2B|column3B)
(column,1C,extra-C1,extra-C2,extra-C3,extra-C4|column2C|column3C)
(column,1D,extra-D1|column2D|column3D)
I am using a regular expression. Let's try this code:
a = LOAD '/home/hduser/pig_ex1/sample1.txt' as line;
b = FOREACH a GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'["](.*)["][,](.*)[,](.*)')) AS (f1,f2,f3);
c = FOREACH b GENERATE CONCAT(f1,'|',f2,'|',f3);
dump c;
Try org.apache.pig.piggybank.storage.CSVExcelStorage(',') from the piggybank jar.
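A hedged sketch of that suggestion (the REGISTER path is an assumption; CSVExcelStorage honors the surrounding double quotes, so each quoted first column arrives as a single field):
REGISTER /path/to/piggybank.jar;  -- adjust to wherever your piggybank jar lives
A = LOAD '/home/hduser/pig_ex1/sample1.txt' USING org.apache.pig.piggybank.storage.CSVExcelStorage(',') AS (c1:chararray,c2:chararray,c3:chararray);
B = FOREACH A GENERATE CONCAT(c1,'|',c2,'|',c3);
DUMP B;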

Merge two lines in Pig

I would like to write a Pig script for the below requirement.
Input is:
ABC,DEF,,
,,GHI,JKL
MNO,PQR,,
,,STU,VWX
Output should be:
ABC,DEF,GHI,JKL
MNO,PQR,STU,VWX
Could anyone please help me?
It will be difficult to solve this problem using native Pig. One option could be to download the datafu-1.2.0.jar library and try the below approach.
input.txt
ABC,DEF,,
,,GHI,JKL
MNO,PQR,,
,,STU,VWX
PigScript:
REGISTER /tmp/datafu-1.2.0.jar;
DEFINE BagSplit datafu.pig.bags.BagSplit();
A = LOAD 'input.txt' USING PigStorage(',') AS(f1,f2,f3,f4);
B = GROUP A ALL;
C = FOREACH B GENERATE FLATTEN(BagSplit(2,$1)) AS mybag;
D = FOREACH C GENERATE FLATTEN(STRSPLIT(REPLACE(BagToString(mybag),'_null_null_null_null',''),'_',4));
E = FOREACH D GENERATE $2,$3,$0,$1;
DUMP E;
Output:
(MNO,PQR,STU,VWX)
(ABC,DEF,GHI,JKL)
Note:
Based on the above input format, my assumption is that in the 1st row the last two columns are null and in the 2nd row the first two columns are null, and similarly for the 3rd and 4th rows.