I get
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:349)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:251)
at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:434)
at Hive.main(Hive.java:21)
I have added hive-serde-2.1.1.jar in IntelliJ. If I run select * from <table> it returns results, but if I run select count(*) from <table> I get the above error. Can anyone help me with this?
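For reference, a minimal sketch of the kind of JDBC client the stack trace implies (the class name Hive, the connection URL, the credentials, and the table name are placeholders, not my actual code). A plain select * is typically served by HiveServer2 as a simple fetch from the table files, while count(*) is compiled into a Tez job that must launch on YARN, which is why only the aggregate query triggers the error:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Hive {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 URL; replace host, port, and database.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // Works: a bare select is answered by a fetch task, no YARN application needed.
            ResultSet all = stmt.executeQuery("select * from my_table limit 10");
            // Fails with the SQLException above: count(*) needs a Tez job,
            // and the Tez/YARN application cannot be launched on this cluster.
            ResultSet cnt = stmt.executeQuery("select count(*) from my_table");
            while (cnt.next()) {
                System.out.println(cnt.getLong(1));
            }
        }
    }
}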
UPDATE: I get this stack trace from the YARN GUI
Application application_1499656878152_0003 failed 2 times due to Error launching appattempt_1499656878152_0003_000002. Got exception: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:123)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
. Failing the application.
Related
I am trying to select data using DBeaver with the following query:
Select * from tablename where datecol='';
The above query works fine for most dates; however, for one date it throws the error below:
SQL Error [2] [08S01]: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1663507669262_19026_3_00, diagnostics=[Task failed, taskId=task_1663507669262_19026_3_00_000009, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1663507669262_19026_3_00_000009_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
I'm a new Repast user learning to run the mesoFON model. I get the error message below. What is the problem?
I'm using Eclipse IDE 2018-09.
FATAL [Thread-5] 11:34:18,767 repast.simphony.ui.GUIScheduleRunner -
RunTimeException when running the schedule
Current tick (1.0)
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at repast.simphony.engine.schedule.DynamicTargetAction.execute(DynamicTargetAction.java:72)
at repast.simphony.engine.schedule.DefaultAction.execute(DefaultAction.java:38)
at repast.simphony.engine.schedule.ScheduleGroup.executeList(ScheduleGroup.java:205)
at repast.simphony.engine.schedule.ScheduleGroup.execute(ScheduleGroup.java:231)
at repast.simphony.engine.schedule.Schedule.execute(Schedule.java:352)
at repast.simphony.ui.GUIScheduleRunner$ScheduleLoopRunnable.run(GUIScheduleRunner.java:52)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.reflect.InvocationTargetException
at meso_FON.application.Environment$$FastClassByCGLIB$$fd509841.invoke(<generated>)
at net.sf.cglib.reflect.FastMethod.invoke(FastMethod.java:53)
at repast.simphony.engine.schedule.DynamicTargetAction.execute(DynamicTargetAction.java:69)
... 6 more
Caused by: java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.base/java.util.TimSort.mergeLo(TimSort.java:781)
at java.base/java.util.TimSort.mergeAt(TimSort.java:518)
at java.base/java.util.TimSort.mergeCollapse(TimSort.java:448)
at java.base/java.util.TimSort.sort(TimSort.java:245)
at java.base/java.util.Arrays.sort(Arrays.java:1515)
at java.base/java.util.ArrayList.sort(ArrayList.java:1749)
at java.base/java.util.Collections.sort(Collections.java:177)
at org.khelekore.prtree.MinMaxNodeGetter.<init>(MinMaxNodeGetter.java:29)
at org.khelekore.prtree.LeafBuilder.getMM(LeafBuilder.java:69)
at org.khelekore.prtree.LeafBuilder.buildLeafs(LeafBuilder.java:34)
at org.khelekore.prtree.PRTree.load(PRTree.java:65)
at meso_FON.application.Environment.getPRTree(Environment.java:423)
at meso_FON.application.Environment.queryPRTree(Environment.java:234)
... 9 more
It appears that there is an issue with a mesoFON-specific method call. I'd suggest reaching out to the mesoFON model developers directly to see if they can help.
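For background, the "Comparison method violates its general contract!" IllegalArgumentException comes from java.util.TimSort: it is thrown when the Comparator driving the sort is not a consistent total order (for example, not transitive, or returning contradictory results for the same pair). In this trace the comparator lives inside the PRTree library's MinMaxNodeGetter, and one common way such contracts break is NaN values in the numbers being compared. A small, generic Java illustration, unrelated to the mesoFON code itself:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class BrokenComparatorDemo {
    public static void main(String[] args) {
        List<Double> values = new ArrayList<>();
        Random rnd = new Random(42);
        for (int i = 0; i < 1000; i++) {
            values.add(rnd.nextDouble());
            values.add(Double.NaN); // NaN breaks naive '<' / '>' comparisons
        }

        // Broken: for NaN, both a < b and a > b are false, so NaN compares
        // "equal" to everything while ordinary numbers still order among
        // themselves; the relation is not transitive, and TimSort may throw
        // "Comparison method violates its general contract!".
        Comparator<Double> broken = (a, b) -> a < b ? -1 : (a > b ? 1 : 0);

        // Consistent: Double.compare defines a total order (NaN sorts last).
        Comparator<Double> consistent = Comparator.comparingDouble(Double::doubleValue);

        try {
            Collections.sort(new ArrayList<>(values), broken);
            System.out.println("broken comparator happened not to be detected this time");
        } catch (IllegalArgumentException e) {
            System.out.println("broken comparator: " + e.getMessage());
        }
        Collections.sort(values, consistent); // always succeeds
        System.out.println("sorted " + values.size() + " values");
    }
}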
I have an HBase 1.2.3 cluster and have installed Hive 2.1.1. When I try to create an external HBase table through Beeline/HiveServer2, I get an exception, but if I use the Hive CLI it works fine. The create statement is as follows:
create external table hbase_xing(id int, name string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" = ":key,f:name")
tblproperties("hbase.table.name" = "xing");
Exception is:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri Jan 06 19:38:24 CST 2017, null, java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:205)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:742)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
at com.sun.proxy.$Proxy24.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:830)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:845)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3992)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:332)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1166)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
... 3 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to c3.estarspace.com/192.168.0.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1239)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1037)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:844)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:572)
) (state=08S01,code=1)
I put hbase.zookeeper.quorum into hive-site.xml.
Tracing through the source code, it looks like something goes wrong when Hive calls the HBase client, which in turn calls the region server.
This problem seems to exist only in HBase 1.2.3. After upgrading to HBase 1.2.4, Hive works correctly.
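For anyone who hits the same timeout and cannot simply upgrade, a small, hypothetical standalone client that exercises the same Admin.tableExists call that HBaseStorageHandler.preCreateTable makes can help confirm whether the region-server RPC problem is reproducible outside HiveServer2 (the quorum hosts and the table name below are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseMetaCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same quorum setting that was added to hive-site.xml; adjust to the real ensemble.
        conf.set("hbase.zookeeper.quorum", "c1.example.com,c2.example.com,c3.example.com");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Scans hbase:meta, the same step that times out in the stack trace above.
            boolean exists = admin.tableExists(TableName.valueOf("xing"));
            System.out.println("table 'xing' exists: " + exists);
        }
    }
}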
I am trying to learn Apache Pig. I am sorry if it's a lame question.
I have three columns: sitename, upcount and downcount.
When I use describe res, I get: res: {sitename: chararray,upcount: int,downcount: int}
What I am trying to do is find the site-up percentage, computed as upcount/(upcount+downcount).
I am unable to figure out how to achieve it. I tried the following:
res_sum = foreach res generate sitename, upcount+downcount;
But it gave the following error:
Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias res_sum
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias res_sum
at org.apache.pig.PigServer.openIterator(PigServer.java:935)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:565)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:927)
... 13 more
================================================================================
Try
res_sum = foreach res generate sitename, (upcount + downcount) as ud_sum;
If it doesn't work, show your entire script.
I have a Hive query which sometimes runs successfully, but most of the time it fails with the error "java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
Below is my error log:
java.lang.RuntimeException: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.isSplitable(CombineFileInputFormat.java:154)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:283)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:239)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:336)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:435)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:525)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:517)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:399)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1516)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1283)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1101)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:924)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:914)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:367)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:464)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:474)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:756)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:475)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:632)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:570)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:147)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.isSplitable(CombineFileInputFormat.java:151)
... 45 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:458)
... 53 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2219)
at java.util.ArrayList.grow(ArrayList.java:242)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
at java.util.ArrayList.add(ArrayList.java:440)
at java.lang.String.split(String.java:2288)
at sun.net.util.IPAddressUtil.textToNumericFormatV4(IPAddressUtil.java:47)
at java.net.InetAddress.getAllByName(InetAddress.java:1129)
at java.net.InetAddress.getAllByName(InetAddress.java:1098)
at java.net.InetAddress.getByName(InetAddress.java:1048)
at org.apache.hadoop.security.SecurityUtil$StandardHostResolver.getByName(SecurityUtil.java:474)
at org.apache.hadoop.security.SecurityUtil.getByName(SecurityUtil.java:461)
at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:235)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:215)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:677)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNsIds(DFSUtil.java:645)
at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:628)
at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:727)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:458)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:632)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:570)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:147)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
Job Submission failed with exception 'java.lang.RuntimeException(java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider)'
Could anyone tell me why this happens?
I just stumbled across a similar exception myself, and increasing the Hive client heap didn't help. I found I was able to clear up the OutOfMemoryError: GC overhead limit exceeded by adding a partition column to the WHERE clause of the query, so I've concluded that having a very large number of splits is causing this exception. I haven't dug into the code, but I believe I've seen string concatenation in a loop trigger GC thrashing, and something similar might be happening in the CombineHiveInputFormat.getSplits method.
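As an aside, the "string concatenation in a loop" pattern referred to above looks roughly like the following (a generic Java illustration, not the actual CombineHiveInputFormat code): repeatedly appending to a String copies the whole accumulated text each time, and with a very large number of iterations, such as millions of input splits, most CPU time ends up in garbage collection, which is the condition "GC overhead limit exceeded" reports. A StringBuilder avoids the churn.

public class ConcatVsBuilder {
    public static void main(String[] args) {
        int n = 100_000; // make this large enough and the first loop is dominated by GC

        // Anti-pattern: each += allocates a new String and copies everything
        // accumulated so far: O(n^2) character copies and heavy garbage.
        String paths = "";
        for (int i = 0; i < n; i++) {
            paths += "/warehouse/table/part-" + i + ",";
        }

        // Preferred: one growing buffer, roughly O(n) work overall.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("/warehouse/table/part-").append(i).append(',');
        }

        System.out.println(paths.length() + " vs " + sb.length());
    }
}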