How to rename an alias in Pig - apache-pig

I just started learning Pig and I have a problem with renaming an alias. What I want to do is read a file, filter it, and then join it with itself. What I did is this:
register s3n://uw-cse-344-oregon.aws.amazon.com/myudfs.jar
raw = LOAD 's3n://uw-cse-344-oregon.aws.amazon.com/cse344-test-file' USING TextLoader as (line:chararray);
ntriples = foreach raw generate FLATTEN(myudfs.RDFSplit3(line)) as (subject:chararray,predicate:chararray,object:chararray);
ntriples2 = foreach raw generate FLATTEN(myudfs.RDFSplit3(line)) as (subject2:chararray,predicate2:chararray,object2:chararray);
X = FILTER ntriples BY (subject matches '.*business.*');
X2 = FILTER ntriples2 BY (subject2 matches '.*business.*');
joined = JOIN X BY subject, X2 BY subject2;
joined = DISTINCT joined;
store joined into '/user/hadoop/join-results' using PigStorage();
As you can see, I read and filter the file twice in order to have two different aliases for each column. How can I simply copy the filtered relation and assign it new aliases? This operation was supposed to take 18 minutes, but took 1.5 hours.

I found the answer:
X = FILTER ntriples BY (subject matches '.*rdfabout\\.com.*') PARALLEL 50;
y = foreach X generate subject as subject2, predicate as predicate2, object as object2 PARALLEL 50;
This is how you make a copy of X and change the aliases.
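Putting it together, the self-join can then reuse the single filtered relation instead of loading and filtering the file twice. A minimal sketch assembled from the snippets above (paths, UDF, and the filter pattern are taken from the question; the PARALLEL hints are omitted):
register s3n://uw-cse-344-oregon.aws.amazon.com/myudfs.jar
raw = LOAD 's3n://uw-cse-344-oregon.aws.amazon.com/cse344-test-file' USING TextLoader AS (line:chararray);
ntriples = FOREACH raw GENERATE FLATTEN(myudfs.RDFSplit3(line)) AS (subject:chararray, predicate:chararray, object:chararray);
X = FILTER ntriples BY (subject matches '.*business.*');
-- copy X, renaming the columns so the self-join has distinct field names
Y = FOREACH X GENERATE subject AS subject2, predicate AS predicate2, object AS object2;
joined = JOIN X BY subject, Y BY subject2;
joined = DISTINCT joined;
STORE joined INTO '/user/hadoop/join-results' USING PigStorage();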

Related

Basic statistics with Apache Pig

I am trying to characterize fractions of rows having certain properties using Apache Pig.
For example, if the data looks like:
a,15
a,16
a,17
b,3
b,16
I would like to get:
a,0.6
b,0.4
I am trying to do the following:
A = LOAD 'my file' USING PigStorage(',');
total = FOREACH (GROUP A ALL) GENERATE COUNT(A);
which gives me total = (5), but then when I attempt to use this 'total':
fractions = FOREACH (GROUP A by $0) GENERATE COUNT(A)/total;
I get an error.
Clearly COUNT() returns some kind of projection and both projections (in computing total and fractions) should be consistent. Is there a way to make this work? Or perhaps just to cast total to be a number and avoid this projection consistency requirement?
One more way to do the same:
test = LOAD 'test.txt' USING PigStorage(',') AS (one:chararray,two:int);
B = GROUP test by $0;
C = FOREACH B GENERATE group, COUNT(test.$0);
D = GROUP test ALL;
E = FOREACH D GENERATE group,COUNT(test.$0);
F = CROSS C,E;
G = FOREACH F GENERATE $0,$1,$3,(double)$1/$3;
Output:
(a,3,5,0.6)
(b,2,5,0.4)
You will have to project and cast it to double:
total = FOREACH (GROUP A ALL) GENERATE COUNT(A);
rows = FOREACH (GROUP A by $0) GENERATE group,COUNT(A);
fractions = FOREACH rows GENERATE rows.$0,(double)rows.$1/(double)total.$0;
The following modification of what @inquisitive-mind suggested works: inside FOREACH rows, a reference like rows.$0 is treated as a scalar projection of the whole rows relation, which fails because rows has more than one row, whereas total.$0 is a valid scalar since total has exactly one row. Referencing the fields by their aliases avoids the problem:
total = FOREACH (GROUP A ALL) GENERATE COUNT(A);
rows = FOREACH (GROUP A by $0) GENERATE group as colname, COUNT(A) as cnt;
fractions = FOREACH rows GENERATE colname, cnt/(double)total.$0;
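For reference, here is the whole computation as one sketch, assuming the comma-separated sample input from the question is in a file named 'myfile' (the name is hypothetical):
A = LOAD 'myfile' USING PigStorage(',') AS (key:chararray, val:int);
total = FOREACH (GROUP A ALL) GENERATE COUNT(A) AS n;
rows = FOREACH (GROUP A BY key) GENERATE group AS colname, COUNT(A) AS cnt;
-- total has exactly one row, so total.n is projected as a scalar
fractions = FOREACH rows GENERATE colname, cnt / (double)total.n;
DUMP fractions;
-- expected output: (a,0.6) and (b,0.4)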

I have a seemingly simple Pig generate and then filter issue

I am trying to run a simple Pig script on a simple CSV file and I cannot get FILTER to do what I want. I have a test.csv file that looks like this:
john,12,44,,0
bob,14,56,5,7
dave,13,40,5,5
jill,8,,,6
Here is my script that does not work:
people = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filtered = FILTER data BY first == 13;
DUMP filtered;
When I dump data, everything looks good. I get the name and the first and last integer as expected. When I describe the data, everything looks good:
data: {name: bytearray,first: int,second: int}
When I try and filter out data by the first value being 13, I get nothing. DUMP filtered simply returns nothing. Oddly enough, if I change it to first > 13, then all "rows" will print out.
However, this script works:
peopletwo = LOAD 'hdfs:/whatever/test.csv' using PigStorage(',') AS (f1:chararray,f2:int,f3:int,f4:int,f5:int);
datatwo = FOREACH peopletwo GENERATE $0 AS name:chararray, $1 AS first:int, $4 AS second:int;
filteredtwo = FILTER datatwo BY first == 13;
DUMP filteredtwo;
What is the difference between filteredtwo and filtered (or data and datatwo for that matter)? I want to know why the new relation obtained using GENERATE (i.e. data) won't filter in the first script as one would expect.
Specify the datatype in the load itself. See below:
people = LOAD 'test5.csv' USING PigStorage(',') as (f1:chararray,f2:int,f3:int,f4:int,f5:int);
filtered = FILTER people BY f2 == 13;
DUMP filtered;
Output:
(dave,13,40,5,5)
Changing the filter to use > gives:
filtered = FILTER people BY f2 > 13;
Output:
(bob,14,56,5,7)
EDIT
When converting from bytearray you will have to explicitly cast the value of the fields in the FOREACH; the AS clause there only declares the schema and does not convert the underlying bytearray. This works:
people = LOAD 'test5.csv' USING PigStorage(',');
data = FOREACH people GENERATE $0 AS name:chararray,(int)$1 AS f1,(int)$4 AS f2;
filtered = FILTER data BY f1 == 13;
DUMP filtered;
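An alternative sketch (untested) that keeps the untyped LOAD and casts inside the FILTER itself:
people = LOAD 'hdfs:/whatever/test.csv' USING PigStorage(',');
-- cast the bytearray field explicitly before comparing
filtered = FILTER people BY (int)$1 == 13;
DUMP filtered;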

Load multiple Excel sheets using For loop with QlikView

I want to load data from two different Excel files, and use them in the same table in QlikView.
I have two files (DIVISION.xls and REGION.xls), and I'm using the following code:
let tt = 'DIVISION$' and 'REGION$';
FOR Each db_schema in 'DIVISION.xls','REGION.xls'
FOR Each v_db in $(tt)
div_reg_table:
LOAD *
FROM $(db_schema)
(biff, embedded labels, table is $(v_db));
NEXT
NEXT
This code runs fine and does not show any error, but I don't get any data, and I don't see my new table (div_reg_table).
Can you help me?
The main reason that your code does not load any data is due to the form of the tt variable.
When your inner loop executes, it evaluates tt (denoted by $(tt)), which then results in the evaluation of:
'DIVISION$' AND 'REGION$'
Which results in null, since these are just two strings.
If you change your statement slightly, from LET to SET and remove the AND, then the inner loop will work. For example:
SET tt = 'DIVISION$', 'REGION$';
However, this also now means that your inner loop will be executed for each value in tt for both workbooks. This means that unless you have a REGION sheet in your DIVISION workbook (and vice versa), the load will fail.
To avoid this, you may wish to restructure your script slightly so that you have a control table which details the files and tables to load. An example I prepared is shown below:
FileList:
LOAD * INLINE [
FileName, TableName
REGION.XLS, REGION$
DIVISION.XLS, DIVISION$
];
FOR i = 0 TO NoOfRows('FileList') - 1
LET FileName = peek('FileName', i, 'FileList');
LET TableName = peek('TableName', i, 'FileList');
div_reg_table:
LOAD
*
FROM '$(FileName)'
(biff, embedded labels, table is '$(TableName)');
NEXT
If you have just two files, DIVISION.XLS and REGION.XLS, it may be worth using two separate loads one after the other and removing the for loop entirely. For example:
div_reg_table:
LOAD
*
FROM [DIVISION.XLS]
(biff, embedded labels, table is DIVISION$)
WHERE DivisionName = 'AA';
LOAD
*
FROM [REGION.XLS]
(biff, embedded labels, table is REGION$)
WHERE RegionName = 'BB';
Don't you need to make the NoOfRows('FileList') test be 1 less than the answer? NoOfRows('FileList') will evaluate to 2, so you will get a loop step for 0, 1 and 2; step 2 will return a null, which will cause it to fail.
I did it like this:
FileList:
LOAD * INLINE [
FileName, TableName
REGION.XLS, REGION
DIVISION.XLS, DIVISION
];
let vNo = NoOfRows('FileList') - 1;
FOR i = 0 TO $(vNo)
LET FileName = peek('FileName', i, 'FileList');
LET TableName = peek('TableName', i, 'FileList');
div_reg_table:
LOAD
*,
$(i) as I,
FileBaseName() as SOURCE
FROM '$(FileName)'
(biff, embedded labels, table is '$(TableName)');
NEXT

How to extract keys from map?

How do I extract all keys from a map field?
I have a bag of tuples where one of the fields is a map that contains HTTP headers (and their values). I want to create a set of all possible keys (in my dataset) for an HTTP header and count how many times I've seen each one.
Ideally, something like:
A = LOAD ...
B = FOREACH A GENERATE KEYS(http_headers)
C = GROUP FLATTEN(B) BY $0
D = FOREACH C GENERATE group, COUNT($0)
(I didn't test it, but it illustrates the idea.)
How do I do something like this? If I can extract a bag of keys from a map, that would actually solve it. I just couldn't find any function like this in the Pig Latin documentation.
Yes, there is a built-in function in Pig to accomplish this: KEYSET.
Example:
/* data */
[a#1,b#2,c#3]
[green#sam,eggs#I,ham#am]
A = load 'data' as (M:[]);
B = foreach A generate KEYSET($0);
dump B;
Output:
({(b),(c),(a)})
({(ham),(eggs),(green)})
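To finish the original task (counting how often each key appears across the dataset), you can FLATTEN the key bags, group by key, and count. A short sketch building on the example above:
A = load 'data' as (M:[]);
k_bag = foreach A generate FLATTEN(KEYSET(M)) as k;
grouped = group k_bag by k;
key_counts = foreach grouped generate group, COUNT(k_bag);
dump key_counts;
-- for the sample data each key appears once, e.g. (a,1), (ham,1), ...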

Getting a count of a particular string

How can I count the number of occurrences of a particular string, say 'Y', for each individual row and do calculations on that count after that? For example, how can I find the number of 'Y' values for each 'FMID' and do calculations on that count for each FMID? [Dataset screenshot]
You could use the TOKENIZE built-in function, which converts a row into a bag, and then use nested filtering in the FOREACH to get a bag that contains only the word you are interested in, on which you can use COUNT. See the FOREACH description.
For example:
inpt = load '....' as (line:chararray);
-- note: TOKENIZE splits on whitespace by default; for comma-separated data use TOKENIZE(line, ',')
row_bags = foreach inpt generate line, TOKENIZE(line) as word;
cnt = foreach row_bags {
    match_1 = filter word by $0 == 'Y';
    match_2 = filter word by $0 == 'X';
    generate line, COUNT(match_1) as count_1, COUNT(match_2) as count_2;
};
dump cnt;
Using some functions from the DataFu library, you could get a count for each string in the bag word.
Here's the simplest way I can think of solving your problem:
define Transpose datafu.pig.util.TransposeTupleToBag();
data = LOAD 'input' USING PigStorage(',') AS
(fmid:int, field1:chararray, field2:chararray,
field3:chararray, field4:chararray);
data2 = FOREACH data GENERATE fmid, Transpose($1..) as fields;
data2 = FOREACH data2 {
y_fields = FILTER fields BY value == 'Y';
GENERATE fmid, SIZE(y_fields) as y_cnt;
}
I'm not really certain about the data schema you're working with. I'm going to assume you have a relation in Pig consisting of a sequence of tuples. I'm also going to assume you have a lot of fields, making it a pain to reference each individually.
I'll walk through this example piece by piece to explain. Without loss of generality, I will use this data below for my example:
data = LOAD 'input' USING PigStorage(',') AS (fmid:int, field1:chararray, field2:chararray, field3:chararray, field4:chararray);
Now that we've loaded the data, we want to transpose each tuple to a bag, because once it is in a bag we can perform counts on the items within it more easily. We'll use TransposeTupleToBag from the DataFu library:
define Transpose datafu.pig.util.TransposeTupleToBag();
data2 = FOREACH data GENERATE fmid, Transpose($1..) as fields;
Note that you need at least Pig 0.11 to use this UDF. Note the use of $1.., which is known as a project-range expression in Pig. If you have many fields this is really convenient.
If you were to dump data2 at this point you would get this:
(1000,{(field1,N),(field2,N),(field3,N),(field4,N)})
(2000,{(field1,N),(field2,Y),(field3,N),(field4,N)})
(3000,{(field1,Y),(field2,Y),(field3,N),(field4,N)})
(4000,{(field1,Y),(field2,Y),(field3,Y),(field4,Y)})
What we've done is taken the fields from the tuple after the 0th element (fmid), and transposed these into a bag where each tuple has a key and value field.
Now that we have a bag we can do a simple filter and count:
data2 = FOREACH data2 {
y_fields = FILTER fields BY value == 'Y';
GENERATE fmid, SIZE(y_fields) as y_cnt;
}
Now if you dump data2 you get the expected counts reflecting the number of Y values in the tuple.
(1000,0)
(2000,1)
(3000,2)
(4000,4)
Here is the full source code for my example as a unit test, which you can put directly in the DataFu unit tests to try out:
/**
register $JAR_PATH
define Transpose datafu.pig.util.TransposeTupleToBag();
data = LOAD 'input' USING PigStorage(',') AS (fmid:int, field1:chararray, field2:chararray, field3:chararray, field4:chararray);
data2 = FOREACH data GENERATE fmid, Transpose($1..) as fields;
dump data2;
data2 = FOREACH data2 {
y_fields = FILTER fields BY value == 'Y';
GENERATE fmid, SIZE(y_fields) as y_cnt;
}
dump data2;
STORE data2 INTO 'output';
*/
@Multiline
private String example;
@Test
public void example() throws Exception
{
PigTest test = createPigTestFromString(example);
writeLinesToFile("input",
"1000,N,N,N,N",
"2000,N,Y,N,N",
"3000,Y,Y,N,N",
"4000,Y,Y,Y,Y");
test.runScript();
super.getLinesForAlias(test, "data2");
}
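Finally, if you'd rather not depend on DataFu at all, a rough pure-Pig sketch of the same count, assuming the four field names from the example (TOBAG replaces the transpose, at the cost of losing the field names):
data = LOAD 'input' USING PigStorage(',') AS (fmid:int, field1:chararray, field2:chararray, field3:chararray, field4:chararray);
-- TOBAG turns the four values into a bag of single-field tuples
bagged = FOREACH data GENERATE fmid, TOBAG(field1, field2, field3, field4) AS vals;
counts = FOREACH bagged {
    ys = FILTER vals BY $0 == 'Y';
    GENERATE fmid, SIZE(ys) AS y_cnt;
};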