ERROR 1066: Unable to open iterator for alias - apache-pig

I am getting an "Unable to open iterator for alias" error for a simple LOAD and DUMP operation with Pig. I already took a look at the answers at:
ERROR 1066: Unable to open iterator for alias - Pig
But they don't help me.
My environment:
OS: Windows 7
Pig version: 0.13.0
Mode: Local
The error shows that the exception is 'Caused by' a failure to change the permissions of a file in the TMP directory. But when I checked the TMP directory, there is no such file (perhaps it was deleted after the command finished?).
Logs below (with the -v and -w options):
'D:\H\HADOOP-2.6.0\bin\hadoop-config.cmd' is not recognized as an internal or external command,
operable program or batch file.
15/01/24 09:20:22 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/01/24 09:20:22 INFO pig.ExecTypeProvider: Picked LOCAL as the ExecType
2015-01-24 09:20:22,909 [main] INFO org.apache.pig.Main - Apache Pig version 0.13.0 (r1606446) compiled Jun 29 2014, 02:29:34
2015-01-24 09:20:22,909 [main] INFO org.apache.pig.Main - Logging error messages to: d:\Pig\pig-0.13.0\bin\
2015-01-24 09:20:24,267 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file C:\Users\Venkat/.pigbootup not found
2015-01-24 09:20:24,438 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2015-01-24 09:20:26,205 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-01-24 09:20:26,236 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias a
2015-01-24 09:20:26,236 [main] ERROR org.apache.pig.tools.grunt.Grunt - org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias a
at org.apache.pig.PigServer.openIterator(PigServer.java:912)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:479)
at org.apache.pig.Main.main(Main.java:156)
Caused by: org.apache.pig.backend.datastorage.DataStorageException: ERROR 0: java.io.IOException: Failed to set permissions of path: \tmp\temp946561981 to 0700
at org.apache.pig.impl.io.FileLocalizer.relativeRoot(FileLocalizer.java:484)
at org.apache.pig.impl.io.FileLocalizer.getTemporaryPath(FileLocalizer.java:515)
at org.apache.pig.impl.io.FileLocalizer.getTemporaryPath(FileLocalizer.java:511)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
... 7 more
Caused by: java.io.IOException: Failed to set permissions of path: \tmp\temp946561981 to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:286)
at org.apache.pig.backend.hadoop.datastorage.HPath.setPermission(HPath.java:122)
at org.apache.pig.impl.io.FileLocalizer.createRelativeRoot(FileLocalizer.java:495)
at org.apache.pig.impl.io.FileLocalizer.relativeRoot(FileLocalizer.java:481)
... 10 more
Details also at logfile: D:\Pig\pig-0.13.0\bin\data-1.txt1422071424329.log
Pig Stack Trace (from the logfile; identical to the trace above).
Pig file contents:
a = LOAD 's.csv' AS (NAME:chararray,COUNTRY:chararray,YEAR:int,SPORT:chararray,GOLD:int,SILVER:int,BRONZE:int,TOTAL:int);
DUMP a;
Contents of s.csv:
Yang Yilin China 2008 Gymnastics 1 0 2 3
Leisel Jones Australia 2000 Swimming 0 2 0 2
Is there anything wrong with the syntax of the LOAD statement?
Are there any environment variables that need to be set, other than JAVA and JAVA_HOME?

First, check the ownership of your Hadoop directory. If it is owned by root, change it to your user:
sudo chown -R testuser:testuser /<path-of-hadoop-folder>
That should solve the permission problem.
I hope this works.
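A minimal sketch of checking the result (assumptions: /usr/local/hadoop as the install path and testuser as your login; substitute your own):
ls -ld /usr/local/hadoop    # shows the current owner and group
sudo chown -R testuser:testuser /usr/local/hadoop
ls -ld /usr/local/hadoop    # should now report testuser testuser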

Generally we get the error "ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias" because the path is incorrect or the file path is not accessible.
Check whether the file location is correct.
Check whether you are able to access the file.
I hit this issue when the Hadoop NameNode had safe mode on.
So I ran the command below to turn safe mode OFF:
sudo -u hdfs hadoop dfsadmin -safemode leave
Then it worked for me.
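If you are not sure whether safe mode is the culprit, you can query its status first; a sketch using the standard HDFS admin commands:
sudo -u hdfs hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
sudo -u hdfs hdfs dfsadmin -safemode leave   # only needed if it reports ON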

Related

Hiveserver2 does not start after installing HDP 2.6.4.0-91 using cloudbreak on AWS

Hiveserver2 does not start after installing HDP 2.6.4.0-91 using cloudbreak on AWS.
Start the hiveserver2 in the Ambari UI and check the contents of /var/log/hive/hiveserver2.log.
Below is the error log.
Any help would be appreciated.
Contents of hiveserver2.log
2018-03-08 04:41:53,345 WARN [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(343)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2018-03-08 04:41:53,347 INFO [main]: zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x16203aad5af0040 closed
2018-03-08 04:41:53,347 INFO [main]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:stop(405)) - Shutting down HiveServer2
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2018-03-08 04:41:53,348 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1520480101488_0046 failed 2 times due to AM Container for appattempt_1520480101488_0046_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://ip-10-0-91-7.ap-northeast-2.compute.internal:8088/cluster/app/application_1520480101488_0046 Then click on links to logs of each attempt.
Diagnostics: ExitCodeException exitCode=2: tar: Removing leading `/' from member names
tar: Skipping to next header
gzip: /hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz: invalid compressed data--format violated
tar: Exiting with failure status due to previous errors
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
I had exactly the same issue with HDP on AWS. FYI, in my case the issue was with HDP version 2.6.4.5-2. I'm going to show how I fixed it using this version because it is the latest at this time.
As the error log shows, the problem is that tez.tar.gz is corrupted, so YARN is unable to decompress it in the YARN container.
This tez.tar.gz file is copied from hdfs:///hdp/apps/<hdp_version>/tez/tez.tar.gz.
To reproduce the error and confirm that this file is corrupted, you can run the following commands:
sudo su
su hdfs
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
tar -xvzf tez.tar.gz
You will get the following error:
gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
The fix is pretty simple: just replace the HDFS file with the one on your local file system by running the following commands:
hdfs dfs -rm /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
hdfs dfs -put /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
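Before restarting, it may be worth confirming that the new HDFS copy extracts cleanly; a sketch reusing the paths above (/tmp/tez-check.tar.gz is an arbitrary scratch file):
sudo -u hdfs hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz /tmp/tez-check.tar.gz
tar -tzf /tmp/tez-check.tar.gz > /dev/null && echo "archive OK"   # lists entries without extracting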
Now restart the HiveServer2 service and you're done!
NOTE: If something similar happens with other services, you can do the same thing. Please check the following link, which has more details: https://community.hortonworks.com/articles/30096/foxing-broken-targz-and-jar-files-in-hdp-24.html
Hope this helps!

Submitting Pig job from Google Cloud Dataproc does not add custom jars to Pig classpath

I'm trying to submit a Pig job via Google Cloud Dataproc and include a custom jar that implements a custom load function I use in the Pig script, but I can't find out how to do that.
Adding my custom jar through the UI apparently does not add it to the Pig classpath.
Here's the output of the Pig job, showing it fails to find my class:
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2017-03-29 16:12:21,961 [main] INFO org.apache.pig.Main - Apache Pig version 0.16.0 (r: unknown) compiled Nov 27 2016, 23:14:51
2017-03-29 16:12:21,961 [main] INFO org.apache.pig.Main - Logging error messages to: /tmp/cb3b0696-3f30-4db4-a6a7-bb716d2a8a89/pig_1490803941959.log
2017-03-29 16:12:22,379 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2017-03-29 16:12:22,379 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-29 16:12:22,379 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://aspen-dp-central-m
2017-03-29 16:12:22,404 [main] INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase - GHFS version: 1.6.0-hadoop2
2017-03-29 16:12:22,890 [main] INFO org.apache.pig.PigServer - Pig Script ID for the session: PIG-default-e53a2851-efe5-4e74-bf33-89dfe0733386
2017-03-29 16:12:22,890 [main] WARN org.apache.pig.PigServer - ATS is disabled since yarn.timeline-service.enabled set to false
2017-03-29 16:12:23,247 [main] ERROR org.apache.pig.PigServer - exception during parsing: Error during parsing. Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Failed to parse: Pig script failed to parse:
<line 8, column 13> pig script failed to validate: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:199)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1819)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1527)
at org.apache.pig.PigServer.parseAndBuild(PigServer.java:460)
at org.apache.pig.PigServer.executeBatch(PigServer.java:485)
at org.apache.pig.PigServer.executeBatch(PigServer.java:471)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:172)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:742)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:231)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:206)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:532)
at org.apache.pig.Main.main(Main.java:176)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by:
<line 8, column 13> pig script failed to validate: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1339)
at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:1324)
at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:5184)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3515)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
... 19 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:671)
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1336)
... 27 more
2017-03-29 16:12:23,251 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /tmp/cb3b0696-3f30-4db4-a6a7-bb716d2a8a89/pig_1490803941959.log
2017-03-29 16:12:23,269 [main] INFO org.apache.pig.Main - Pig script completed in 1 second and 477 milliseconds (1477 ms)
Job output is complete
Registering the custom jar inside the Pig script solves the problem.
So, basically, I:
Added my jar file to Google Storage
Registered the jar inside the script
Submitted the Pig job either via the UI or the command line below:
gcloud dataproc jobs submit pig --cluster eduboom-central --file custom.pig --jars=gs://eduboom-dataproc/custom/eduboom.jar
custom.pig:
register eduboom.jar;
raw = LOAD 'hbase://eduboom_table'
USING com.eduboom.pig.load.HBaseMultiScanLoader('2017-03-30T14:00Z_00', '2017-03-30T14:01Z_25', 'cf:*')
AS (key:chararray, data);
DUMP raw;
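As an alternative sketch (assuming the GCS connector is available to the cluster, as it is by default on Dataproc), REGISTER should also accept the full gs:// path, which avoids depending on how --jars stages the file into the working directory:
register 'gs://eduboom-dataproc/custom/eduboom.jar';
-- rest of the script unchanged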

Apache hadoop Installation on Windows 10

While setting up a single-node cluster without Cygwin on Windows 10, I followed this document: Link for Hadoop installation in windows 10
I am facing the below error while starting HDFS using D:\hadoop-2.6.2.tar\hadoop-2.6.2\hadoop-2.6.2\sbin>start-dfs.cmd
Error message stack trace:
17/01/12 12:25:42 FATAL datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:620)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
17/01/12 12:25:42 INFO util.ExitUtil: Exiting with status 1
There is also this error message about starting the namenode:
17/01/12 12:25:43 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1022)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
17/01/12 12:25:43 INFO util.ExitUtil: Exiting with status 1
[Problem analysis] The permissions on the /data directory are insufficient, so the NameNode cannot be started.
[Solution]
(1) As root, assign ownership of the /data directory to the hadoop user;
(2) empty the /data directory;
(3) reformat the NameNode and restart the Hadoop cluster.
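A sketch of those three steps as commands (assumptions: /data is the DFS storage directory, the service account is named hadoop, and a Unix-style shell; adapt the paths and start script for Windows):
sudo chown -R hadoop:hadoop /data   # (1) give the hadoop user ownership
sudo rm -rf /data/*                 # (2) empty the directory -- this destroys existing HDFS data!
hdfs namenode -format               # (3) reformat the NameNode...
start-dfs.sh                        # ...and restart the cluster (start-dfs.cmd on Windows)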

Pig Hcatalog error

I am trying to run a Pig script in local mode on a single-node cluster, as shown below.
hduser@ubuntu:~$ pig -x local -f "/home/hduser/ddsoft/pigscript/FirstUDF.pig"
But I am getting below error.
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 101: file
'/home/hduser/ddsoft/hive-0.13.1-bin/hcatalog/share/hcatalog/hcatalog-core-0.13.1.jar'
does not exist.
How do I register the jar file mentioned in the error message? I tried updating .bashrc, but it didn't fix the error.

Error starting PIG: ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected

I downloaded Pig from Apache and installed it, then tried to run it using pig -x local
This is what I get:
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Logging error messages to: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,097 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/ubuntu/.pigbootup not found
2015-12-10 15:06:26,132 [main] ERROR org.apache.pig.Main - ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
Details at logfile: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,157 [main] INFO org.apache.pig.Main - Pig script completed in 206 milliseconds (206 ms)
My log file contains the following:
Error before Pig is launched
----------------------------
ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
java.lang.IncompatibleClassChangeError: Found interface jline.Terminal, but class was expected
at jline.ConsoleReader.<init>(ConsoleReader.java:174)
at jline.ConsoleReader.<init>(ConsoleReader.java:169)
at org.apache.pig.Main.run(Main.java:556)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
================================================================================
After I downloaded and extracted the package, I did the following (pig is in /usr/local/pig):
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79
export PIG_PREFIX=/usr/local/pig
export PATH=$PATH:$PIG_PREFIX/bin
Any ideas what is wrong?
Thanks,
Serban
Add this:
export HADOOP_USER_CLASSPATH_FIRST=true
Refer to:
https://issues.apache.org/jira/browse/PIG-3851
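This is an environment variable read by the Hadoop launcher scripts, so it has to be set in the same shell that starts Pig; a minimal sketch, assuming bash:
export HADOOP_USER_CLASSPATH_FIRST=true   # put user-supplied jars (including Pig's newer jline) ahead of Hadoop's on the classpath
pig -x local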