Pig Latin positional notation, how to specify an interval - apache-pig

Let's say I have a very wide data source:
big_thing = LOAD 'some_path' using MySpecialLoader;
Now I want to generate some smaller thing composed of a subset of big_thing's columns.
smaller_thing = FOREACH big_thing GENERATE
$21,$22,$23 ...... $257;
Is there a way to achieve this without having to write out all the columns?
I'm assuming yes but my searches aren't coming up with much, I think I'm just using the wrong terminology.
EDIT:
So it looks like my question is being very misunderstood. Since I'm a Python person I'll give a Python analogy.
Say I have an array l1 which is made up of arrays. So it looks like a grid, right? Now I want the array l2 to be a subset of l1 such that l2 contains a bunch of columns from l1. I would do something like this:
l2 = [[l[a],l[b],l[c],l[d]] for l in l1]
# a,b,c,d are just some constants.
In pig this is equivalent to something like:
smaller_thing = FOREACH big_thing GENERATE
$1,$22,$3,$21;
But I have a heck of a lot of columns. And the columns I'm interested in are all sequential, and there are a lot of them. Then in Python I would do this:
l2 = [l[x:y] for l in l1]
# again, x and y are constants, e.g. x=20, y=180000000. See, lots of stuff I don't want to type out
My question is what is the pig equivalent to this?
smaller_thing = FOREACH big_thing GENERATE ?????
And what about stuff like this:
Python:
l2 = [l[x:y]+l[a:b]+[l[b],l[c],l[d]] for l in l1]
Pig:
smaller_thing = FOREACH big_thing GENERATE ?????

Yes, you can simply load the dataset without specifying columns.
However, loading the data with column names will help you identify the columns in later scripts.
A UDF can help you perform this kind of query.
For example,
REGISTER UDF\path;
a = load 'data' as (a1);
b = foreach a generate UDF.Func(a1,2,4);
UDF:
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import java.io.IOException;

public class col_gen extends EvalFunc<String>
{
    @Override
    public String exec(Tuple tuple) throws IOException {
        String data = tuple.get(0).toString();  // the record as one delimited string
        int x = (int) tuple.get(1);             // start column (inclusive)
        int y = (int) tuple.get(2);             // end column (inclusive)
        String[] data3 = data.split(",");
        String data2 = data3[x] + ",";
        x = x + 1;
        while (x <= y)
        {
            data2 += data3[x] + ",";
            x++;
        }
        data2 = data2.substring(0, data2.length() - 1); // drop the trailing comma
        return data2;
    }
}

The answer can be found in this post: http://blog.cloudera.com/blog/2012/08/process-a-million-songs-with-apache-pig/
Distanced = FOREACH Different GENERATE artistLat..songPreview, etc;
The .. says use everything from artistLat to songPreview.
The same thing can be done with positional notation, e.g. $1..$6.
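Applied to the original question, a rough sketch (assuming the range projection syntax available in recent Pig versions, and that big_thing really has at least 258 fields) would be:
big_thing = LOAD 'some_path' USING MySpecialLoader;
-- contiguous columns 21 through 257, like Python's l[21:258]
smaller_thing = FOREACH big_thing GENERATE $21..$257;
-- ranges and single columns can be mixed, like l[x:y] + l[a:b] + [l[c]]
mixed_thing = FOREACH big_thing GENERATE $1..$6, $21..$40, $3;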

Related

difference between two lists that include duplicates

I have a problem with two lists which contain duplicates
a = [1,1,2,3,4,4]
b = [1,2,3,4]
I would like to be able to extract the differences between the two lists, i.e.
c = [1,4]
but if I do c = a - b I get c = []
It should be trivial but I can't figure it out :(
I also tried to walk over the bigger list and remove items from it when I find them in the smaller list, but I can't update a list while iterating over it, so that does not work either.
Has anyone got an idea?
Thanks
You see an empty c as a result because removing e.g. 1 removes all elements that are equal to 1.
groovy:000> [1,1,1,1,1,2] - 1
===> [2]
What you need instead is to remove each occurrence of a specific value separately. For that, you can use Groovy's Collection.removeElement(n), which removes a single element that matches the value. You can do it in a regular for-loop manner, or you can use another Groovy collection method, e.g. inject, to reduce a copy of a by removing each occurrence separately.
def c = b.inject([*a]) { acc, val -> acc.removeElement(val); acc }
assert c == [1,4]
Keep in mind that the inject method receives a copy of the a list (the expression [*a] creates a new list from the elements of a). Otherwise, acc.removeElement() would modify the existing a list. The inject method is the equivalent of the popular reduce or fold operation. Each iteration from this example could be visualized as:
--inject starts--
acc = [1,1,2,3,4,4]; val = 1; acc.removeElement(1) -> return [1,2,3,4,4]
acc = [1,2,3,4,4]; val = 2; acc.removeElement(2) -> return [1,3,4,4]
acc = [1,3,4,4]; val = 3; acc.removeElement(3) -> return [1,4,4]
acc = [1,4,4]; val = 4; acc.removeElement(4) -> return [1,4]
--inject ends--
PS: Kudos to almighty tim_yates who recommended improvements to that answer. Thanks, Tim!
The most readable version that comes to my mind is:
a = [1,1,2,3,4,4]
b = [1,2,3,4]
c = a.clone()
b.each {c.removeElement(it)}
If you use this frequently, you could add a method to the List metaClass:
List.metaClass.removeElements = { values -> values.each { delegate.removeElement(it) } }
a = [1,1,2,3,4,4]
b = [1,2,3,4]
c = a.clone()
c.removeElements(b)

generating DataFrames in a for loop in Scala Spark causes out of memory

I'm generating small DataFrames in a for loop. At each round of the for loop, I pass the generated DataFrame to a function which returns a double. This simple process (which I thought could easily be taken care of by the garbage collector) blows up my memory. When I look at the Spark UI, each round of the for loop adds a new "SQL{1-500}" entry (my loop runs 500 times). My question is how to drop this SQL object before generating a new one.
My code is something like this:
Seq.fill(500){
  val data = (1 to 1000).map(_=>Random.nextInt(1000))
  val dataframe = createDataFrame(data)
  myFunction(dataframe)
  dataframe.unpersist()
}
def myFunction(df: DataFrame)={
  df.count()
}
I tried to solve this problem by dataframe.unpersist() and sqlContext.clearCache() but neither of them worked.
You have two places where I suspect something fishy is happening:
in the definition of myFunction: you really need to put the = before the body of the definition. I have had typos like that compile but produce really weird errors (note I changed your myFunction for debugging purposes)
it is better to fill your Seq with something you know and then apply foreach or some such
(You also need to replace random.nexInt with Random.nextInt; in addition, you can only create a DataFrame from a Seq of a type that is a subtype of Product, such as a tuple, and you need to use sqlContext to call createDataFrame.)
This code works with no memory issues:
Seq.fill(500)(0).foreach{ i =>
  val data = {1 to 1000}.map(_.toDouble).toList.zipWithIndex
  val dataframe = sqlContext.createDataFrame(data)
  myFunction(dataframe)
}
def myFunction(df: DataFrame) = {
  println(df.count())
}
Edit: parallelizing the computation (across 10 cores) and returning the RDD of counts:
sc.parallelize(Seq.fill(500)(0), 10).map{ i =>
  val data = {1 to 1000}.map(_.toDouble).toList.zipWithIndex
  val dataframe = sqlContext.createDataFrame(data)
  myFunction(dataframe)
}
def myFunction(df: DataFrame) = {
  df.count()
}
Edit 2: the difference between declaring the function myFunction with = and without = is that the first is a (usual) function definition, while the other is a procedure definition and is only used for methods that return Unit. See explanation. Here is this point illustrated in the Spark shell:
scala> def myf(df:DataFrame) = df.count()
myf: (df: org.apache.spark.sql.DataFrame)Long
scala> def myf2(df:DataFrame) { df.count() }
myf2: (df: org.apache.spark.sql.DataFrame)Unit

How can I generate a schema from a text file? (Hadoop-Pig)

Somehow I got filename.log, which looks like this, for example (tab-separated):
Name:Peter Age:18
Name:Tom Age:25
Name:Jason Age:35
Because the value of the key column may differ, I cannot define a schema when I load the text, like
a = load 'filename.log' as (Name:chararray,Age:int);
Neither do I want to call columns by position, like
b = foreach a generate $0,$1;
What I want to do, using only that filename.log, is to make it possible to call each value by key, for example:
a = load 'filename.log' using PigStorage('\t');
b = group a by Name;
c = foreach b generate group, COUNT(b);
dump c;
For that purpose, I wrote a Java UDF which separates key:value pairs and gets the value for every field in the tuple, as below:
import java.util.ArrayList;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class SPLITALLGETCOL2 extends EvalFunc<Tuple> {
    @Override
    public Tuple exec(Tuple input) {
        TupleFactory mTupleFactory = TupleFactory.getInstance();
        ArrayList<String> mProtoTuple = new ArrayList<String>();
        Tuple output;
        String target = input.toString().substring(1, input.toString().length() - 1);
        String[] tokenized = target.split(",");
        try {
            for (int i = 0; i < tokenized.length; i++) {
                mProtoTuple.add(tokenized[i].split(":")[1]);
            }
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        } catch (Exception e) {
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        }
    }
}
How should I alter this method to get what I want? Or how should I write another UDF to get there?
Whatever you do, don't use a tuple to store the output. Tuples are intended to store a fixed number of fields, where you know what every field contains. Since you don't know that the keys will be in Name,Age form (or even exist, or that there won't be more) you should use a bag. Bags are unordered sets of tuples. They can have any number of tuples in them as long as the tuples have the same schema. These are all valid bags for the schema B: {T:(key:chararray, value:chararray)}:
{(Name,Foo),(Age,Bar)}
{(Age,25),(Name,Jim)}
{(Name,Bob)}
{(Age,30),(Name,Roger),(Hair Color,Brown)}
{(Hair Color,),(Name,Victor)} -- Note the Null value for Hair Color
However, it sounds like you really want a map:
myudf.py
@outputSchema('M:map[]')
def mapize(the_input):
    out = {}
    for kv in the_input.split(' '):
        k, v = kv.split(':')
        out[k] = v
    return out
myscript.pig
register '../myudf.py' using jython as myudf ;
A = LOAD 'filename.log' AS (total:chararray) ;
B = FOREACH A GENERATE myudf.mapize(total) ;
-- Sample usage, grouping by the name key.
C = GROUP B BY M#'Name' ;
Using the # operator you can pull out all values from the map using the key you give. You can read more about maps here.
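As a rough sketch of that idea (the relation and alias names are reused from the script above purely for illustration):
-- pull individual values out of the map by key
D = FOREACH B GENERATE M#'Name' AS name, M#'Age' AS age ;
-- name can then be grouped or filtered on like any ordinary field
E = GROUP D BY name ;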

Can I pass parameters to UDFs in Pig script?

I am relatively new to Pig scripting. I would like to know if there is a way of passing parameters to Java UDFs in Pig.
Here is the scenario:
I have a log file which has different columns (each representing a primary key in another table). My task is to get the count of distinct primary key values in the selected column.
I have written a Pig script which does the job of getting the distinct primary keys and counting them.
However, I am now supposed to write a new UDF for each column. Is there a better way to do this? For instance, if I could pass a column number as a parameter to the UDF, it would avoid the need for me to write multiple UDFs.
The way to do it is by using DEFINE and the constructor of the UDF. So here is an example of a custom "splitter":
REGISTER com.sample.MyUDFs.jar;
DEFINE CommaSplitter com.sample.MySplitter(',');
B = FOREACH A GENERATE f1, CommaSplitter(f2);
Hopefully that conveys the idea.
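On the Java side, the argument in the DEFINE statement is simply handed to the UDF's constructor and stored for use in exec. A minimal sketch of what such a splitter could look like (the class body below is illustrative, not taken from the original answer):
import java.io.IOException;
import java.util.Arrays;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Illustrative only: a splitter whose delimiter is supplied via DEFINE.
public class MySplitter extends EvalFunc<Tuple> {
    private final String delimiter;

    // Pig calls this constructor with the argument given in the DEFINE statement.
    public MySplitter(String delimiter) {
        this.delimiter = delimiter;
    }

    @Override
    public Tuple exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        // Split the incoming field on the configured delimiter
        // and return the pieces as a new tuple.
        String[] parts = input.get(0).toString().split(delimiter);
        return TupleFactory.getInstance().newTuple(Arrays.asList(parts));
    }
}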
To pass parameters you do the following in your Pig script:
UDF(document, '$param1', '$param2', '$param3')
edit: Not sure if those params need to be wrapped in ' ' or not
while in your UDF you do, for example:
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.UDFContext;

public class UDF extends EvalFunc<Boolean> {
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0)
            return false;
        FileSystem fs = FileSystem.get(UDFContext.getUDFContext().getJobConf());
        String var1 = input.get(1).toString();
        InputStream var1In = fs.open(new Path(var1));
        String var2 = input.get(2).toString();
        InputStream var2In = fs.open(new Path(var2));
        String var3 = input.get(3).toString();
        InputStream var3In = fs.open(new Path(var3));
        return doyourthing(input.get(0).toString());
    }
}
Yes, you can pass any parameter in the Tuple parameter input of your UDF:
exec(Tuple input)
and access it using
input.get(index)
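To make that mechanism concrete, here is a small sketch (the UDF name and arguments are made up for illustration, not part of the answer above): the script passes a column value plus an integer literal, and the UDF reads the literal from the input tuple.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Hypothetical call in the Pig script: b = FOREACH a GENERATE IsLongerThan(col5, 10);
public class IsLongerThan extends EvalFunc<Boolean> {
    @Override
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() < 2) {
            return false;
        }
        String value = input.get(0).toString();
        // The second argument is the integer literal written in the script.
        int threshold = ((Number) input.get(1)).intValue();
        return value.length() > threshold;
    }
}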

overriding a method in R, using NextMethod

How does this work in R...
I am using a package (zoo 1.6-4) that defines a S3 class for time series sets.
I am writing a derived class where I want to override a few methods and can't get past this one: [.zoo!
In my derived class rows are indexed by timestamp, like in zoo, but differently from zoo, I allow only POSIXct values in the index. My users will be selecting columns all of the time, while slicing series only occasionally, so I want to offer obj[name] instead of obj[, name].
My objects have class c("delftfews", "zoo").
but...
How do I override a method?
I tried this:
"[.delftfews" <- function(x, i, j, drop=TRUE, ...) {
if (missing(i)) return(NextMethod())
if (all(class(i) == "character") && missing(j)) {
return(NextMethod('[', x=x, i=1:NROW(x), j=i, drop=drop, ...))
}
NextMethod()
}
But I get this error: Error in rval[i, j, drop = drop., ...] : incorrect number of dimensions.
I have solved it by editing the source of zoo: I removed those ..., but I don't get why that works. Can anybody explain what is going on here?
The problem is that with the above definition of [.delftfews this code:
library(zoo)
z <- structure(zoo(cbind(a = 1:3, b = 4:6)), class = c("delftfews", "zoo"))
z["a"]
# generates this call: `[.zoo`(x = 1:6, i = 1:3, j = "a", drop = TRUE, z, "a")
Your code does work as is if you write the call like this:
z[j = "a"]
# generates this call: `[.zoo`(x = z, j = "a")
I think what you want is to change the relevant line in [.delftfews to this:
return(NextMethod(.Generic, object = x, i = 1:NROW(x), drop = drop))
# z["a"] now generates this call: `[.zoo`(x = z, i = 1:3, j = "a", drop = TRUE)
A point of clarification: allowing only POSIXct index values does not allow indexing columns by name only. I'm not sure how you arrived at that conclusion.
You're overriding zoo correctly, but I think you misunderstand NextMethod. The error is caused by if (missing(i)) return(NextMethod()), which calls [.zoo if i is missing, but [.zoo requires i because zoo's internal data structure is a matrix. Something like this should work:
if (missing(i)) i <- 1:NROW(x)
though I'm not sure if you have to explicitly pass this new i to NextMethod...
You may be interested in the xts package, if you haven't already taken a look at it.