Unable to extract data with double pipe delimiter in Pig Script - apache-pig

I am trying to extract data which is pipe-delimited in Pig. The following is my command:
L = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING PigStorage('||');
I am getting the following error:
2016-08-04 23:58:21,122 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: Pig script failed to parse:
<line 1, column 4> pig script failed to validate: java.lang.RuntimeException: could not instantiate 'PigStorage' with arguments '[||]'
My sample input file has exactly 5 lines, as follows:
POS_TIBCO||HDFS||POS_LOG||1||7806||2016-07-18||1||993||0
POS_TIBCO||HDFS||POS_LOG||2||7806||2016-07-18||1||0||0
POS_TIBCO||HDFS||POS_LOG||3||7806||2016-07-18||1||0||5
POS_TIBCO||HDFS||POS_LOG||4||7806||2016-07-18||1||0||0
POS_TIBCO||HDFS||POS_LOG||5||7806||2016-07-18||1||0||19.99
I tried several options, such as putting a backslash before the delimiter (\||, \|\|), but everything failed. I also tried with a schema but got the same error. I am using Hortonworks (HDP 2.2.4) and Pig (0.14.0).
Any help is appreciated. Please let me know if you need any further details.

I have faced this case, and after checking the PigStorage source code, I think the PigStorage argument must parse to a single character.
So we can use this code instead:
L0 = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING PigStorage('|');
L = FOREACH L0 GENERATE $0,$2,$4,$6,$8,$10,$12,$14,$16;
This is helpful if you know how many columns you have, and it will not affect performance because the projection happens on the map side.
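For the sample rows above, here is a sketch of that projection with casts and an explicit schema; the field names are guesses for illustration, not from the original post:
L0 = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING PigStorage('|');
-- Splitting '||' on a single '|' produces an empty field between every
-- two values, so the real data sits at the even positions $0..$16.
L = FOREACH L0 GENERATE (chararray)$0 AS source, (chararray)$2 AS sink,
    (chararray)$4 AS log_name, (int)$6 AS seq, (int)$8 AS store_id,
    (chararray)$10 AS biz_date, (int)$12 AS flag, (int)$14 AS qty,
    (double)$16 AS amount;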

When you load data using PigStorage, it expects only a single character as the delimiter.
However, if you still want to split on the double pipe directly, you can use MyRegExLoader from piggybank. It takes a regular expression that matches a whole line, with one capture group per field; the regex below is written for the nine-field sample above:
REGISTER '/path/to/piggybank.jar';
A = LOAD 'entirepath_in_HDFS/b.txt/part-m*' USING org.apache.pig.piggybank.storage.MyRegExLoader('([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)\\|\\|([^|]*)');

Related

Load File delimited by double colon :: in pig

The following is a sample dataset delimited by a double colon (::):
1::Toy Story (1995)::Animation|Children's|Comedy
I want to extract three fields from the above dataset: movieID, title, and genre. I have written the following code for that:
movies = LOAD 'location/of/dataset/on/hdfs'
    USING PigStorage('::')
    AS (MovieID:int, title:chararray, genre:chararray);
But I am getting the following error:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: Pig script failed to parse:
<file script.pig, line 1, column 9> pig script failed to validate:
java.lang.RuntimeException: could not instantiate 'PigStorage' with arguments '[::]'
Use MyRegExLoader; you will need piggybank.jar for this:
REGISTER '/path/to/piggybank.jar';
A = LOAD '/path/to/dataset' USING org.apache.pig.piggybank.storage.MyRegExLoader('([^\\:]+)::([^\\:]+)::([^\\:]+)')
as (movieid:int, title:chararray, genre:chararray);
Output:
(1,Toy Story (1995),Animation|Children's|Comedy)
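As in the double-pipe question above, an alternative sketch is to split on a single ':' and keep only the even-numbered fields (this assumes no title or genre itself contains a colon):
movies0 = LOAD 'location/of/dataset/on/hdfs' USING PigStorage(':');
-- '::' split on ':' leaves an empty field between every two values,
-- so the real data sits at $0, $2 and $4.
movies = FOREACH movies0 GENERATE (int)$0 AS movieID, (chararray)$2 AS title, (chararray)$4 AS genre;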

Error loading pig script

I am having difficulty loading data using an Apache Pig script.
cat data15.txt
1,(2,3)
2,(3,4)
grunt> a = load 'nikhil/data15.txt' using PigStorage(',') as (x:int, y:tuple(y1:int,y2:int));
grunt> dump a;
(1,)
(2,)
I know it's too late to answer this.
The problem is that the tuple and the other field share the same delimiter, ','. Pig fails to do the schema conversion.
You can try something like this; you need to change the delimiter:
1:(5,7,7)
3:(7,9,4)
5:(5,9,7)
and run the Pig script as:
A = load 'file.txt' using PigStorage(':') as (t1:int,t2:tuple(x:int,y:int,z:int));
dump A;
The output is:
(1,(5,7,7))
(3,(7,9,4))
(5,(5,9,7))
You can change the delimiter in the input file using the sed command and then load the file, as sketched below.
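For example, a sed one-liner like the following (the output file name is illustrative); without the g flag, sed replaces only the first comma on each line, so the commas inside the tuple stay intact:
sed 's/,/:/' data15.txt > data15_colon.txt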

Apache Pig 0.12.0 on Hue not preprocessing statements as expected

I'm using Hue for Pig scripts on Amazon EMR. I am using the declare and default statements as mentioned in the documentation.
I have some %default and %declare statements, and it looks like they are not preprocessed within Hue. Therefore, although the parameters are defined in my script, the editor keeps popping up a parameter configuration window. If I leave the parameter blank, the job fails with an error.
Sample script:
%declare OUTPUT_FOLDER 'testingOutput01';
ts = LOAD 's3://testbucket1/input/testdata-00000.gz' USING PigStorage('\t');
STORE ts INTO 's3://testbucket1/$OUTPUT_FOLDER' USING PigStorage('\t');
Upon execution, it shows the pop-up window asking for a value for OUTPUT_FOLDER. If I leave it blank, it fails with the following error:
2015-06-23 20:15:54,908 [main] ERROR org.apache.pig.Main - ERROR 2997:
Encountered IOException. org.apache.pig.tools.parameters.ParseException:
Encountered "<EOF>" at line 1, column 12.
Was expecting one of:
<IDENTIFIER> ...
<OTHER> ...
<LITERAL> ...
<SHELLCMD> ...
Is that the expected behavior? Is this a known issue or am I missing something?
Configuration details:
AMI version: 3.7.0
Hadoop distribution: Amazon 2.4.0
Applications: Hive 0.13.1, Pig 0.12.0, Impala 1.2.4, Hue
The same behavior is seen with %default instead of %declare.
If you need any clarifications then please do comment on this question. I will update it as needed.
Hue does not support %declare (or %default) statements yet. It will be fixed with: https://issues.cloudera.org/browse/HUE-2508
The current temporary workaround is to put any value in the popup.
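Outside the Hue editor the parameters work as documented; for instance, with %default instead of %declare, a value passed via Pig's standard -param option takes precedence (the script file name here is illustrative):
pig -param OUTPUT_FOLDER=testingOutput01 script.pig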

Error in BQ shell Loading Datastore with write_disposition as Write append

1. I tried to load into an existing table [using a Datastore file].
2. The bq shell asked me to add write_disposition WRITE_APPEND to load into the existing table.
3. If I do the above, it throws the following error:
load --source_format=DATASTORE_BACKUP --write_disposition=WRITE_append --allow_jagged_rows=None sample_red.t1estchallenge_1 gs://test.appspot.com/bucket/ahFzfnZpcmdpbi1yZWQtdGVzdHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiBwLgCDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.entity.backup_info
Error parsing command: flag --allow_jagged_rows=None: ('Non-boolean argument to boolean flag',None)
I tried allow_jagged_rows=0 and allow_jagged_rows=None; nothing works, I just get the same error.
Please advise on this.
UPDATE: As Mosha suggested, --allow_jagged_rows=false has worked. It should come before --write_disposition=WRITE_TRUNCATE. But this has led to another issue with encoding. Can anyone say what the encoding type for DATASTORE_BACKUP should be? I tried both --encoding=UTF-8 and --encoding=ISO-8859.
load --source_format=DATASTORE_BACKUP --allow_jagged_rows=false --write_disposition=WRITE_TRUNCATE sample_red.t1estchallenge_1 gs://test.appspot.com/STAGING/ahFzfnZpcmdpbi1yZWQtdGVzdHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiBwLgCDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.entityname.backup_info
Please advise.
You should use "false" (or "true") with boolean arguments, i.e.
--allow_jagged_rows=false
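Since bq flags follow the standard Google flags convention, the negated form --noallow_jagged_rows should behave the same as --allow_jagged_rows=false (a convention note, not something from the original answer):
load --source_format=DATASTORE_BACKUP --noallow_jagged_rows --write_disposition=WRITE_TRUNCATE sample_red.t1estchallenge_1 gs://test.appspot.com/STAGING/...backup_info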

Error using CSVLoader from piggybank

I am trying to use CSVLoader from Piggybank. Below are the first two lines of my code:
register 'piggybank.jar' ;
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
It throws the following error:
2013-10-24 14:26:51,427 [main] INFO
org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file
system at: file:///
2013-10-24 14:26:52,029 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve org.apache.pig.piggybank.storage.CSVLoader using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Can someone tell me what's going on? I am executing this script from the same folder where my piggybank.jar is located.
I ran into a similar problem when I was experimenting with Pig, although it was the XMLLoader for me. The solution that worked for me was to register the full path to the jar instead of the relative path. So if the jar is located at /usr/lib/pig/piggybank.jar, run the code as follows:
register '/usr/lib/pig/piggybank.jar' ;
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
I checked out the code from the URL 'http://svn.apache.org/repos/asf/pig/trunk/' and rebuilt the jar file. It works fine now. :)
The same is working fine for me:
register 'piggybank.jar' ;
A = load '/xmlinput/demo.xml' using org.apache.pig.piggybank.storage.XMLLoader('property') as (x:chararray);
B = foreach A generate REPLACE(x,'[\n]','') as x;
C = foreach B generate REGEX_EXTRACT_ALL(x,'.(?:)([^<]).(?:)([^<]).*');
D =FOREACH C GENERATE FLATTEN (($0));
STORE D INTO 'xmlcsvpig' USING org.apache.pig.piggybank.storage.CSVExcelStorage();