InvalidPathException when selecting imported API for apikit:config - anypoint-studio

I'm new here, and I have a question that my Google searches could not answer.
I'm using Anypoint Studio for MuleSoft and am running into a problem. As the title states, I'm getting an InvalidPathException when selecting an imported API for apikit:config. The value of the api attribute is populated automatically when I select my API. Please see more details below.
flowfile.xml (apikit:config)
<apikit:config outboundHeadersMapName="outboundHeadersMapName" httpStatusVarName="httpStatus" doc:name="Router" doc:id="480358cd-654a-4b61-8331-8cc74d1f0704" name="Router" api="resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml" >
Error
Caused by: org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.deployment.model.api.DeploymentInitException: InvalidPathException: Illegal char <:> at index 8: resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml
Caused by: org.mule.runtime.deployment.model.api.DeploymentInitException: InvalidPathException: Illegal char <:> at index 8: resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml
Caused by: org.mule.runtime.core.api.config.ConfigurationException: Illegal char <:> at index 8: resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml
Caused by: org.mule.runtime.api.lifecycle.InitialisationException: Illegal char <:> at index 8: resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 8: resource::6e2fcd5a-6e44-4ffd-9766-303b81ab3fa6:sample-api:1.0.3:raml:zip:orgname/sample/exp.api/sample-api.raml
at org.mule.module.apikit.Configuration.initialise(Configuration.java:110) ~[?:?]
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:52) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.api.util.func.CheckedConsumer.accept(CheckedConsumer.java:19) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.phases.DefaultLifecyclePhase.applyLifecycle(DefaultLifecyclePhase.java:115) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.phases.MuleContextInitialisePhase.applyLifecycle(MuleContextInitialisePhase.java:69) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.config.internal.SpringRegistryLifecycleManager$SpringContextInitialisePhase.applyLifecycle(SpringRegistryLifecycleManager.java:122) ~[mule-module-spring-config-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleCallback.applyLifecycle(RegistryLifecycleCallback.java:94) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleCallback.doApplyLifecycle(RegistryLifecycleCallback.java:87) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleCallback.doOnTransition(RegistryLifecycleCallback.java:68) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleCallback.lambda$onTransition$0(RegistryLifecycleCallback.java:51) ~[mule-core-4.2.2.jar:4.2.2]
at org.mule.runtime.core.api.util.func.CheckedRunnable.run(CheckedRunnable.java:21) ~[mule-core-4.2.2.jar:4.2.2]
.....
I'd appreciate any insights on this. I know the error involves the 'resource::' prefix, but if I change anything there, it instead reports that no .raml file could be found. I'm starting to think it might be a framework bug. In the examples I've found online, I've never seen anyone use an imported API as the source for their .raml files; they always keep the .raml files under src/main/resources/api, along the lines of the sketch below.
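For comparison, a configuration pointing at a local RAML file (hypothetical file name; this assumes the RAML sits under src/main/resources/api, and the exact relative path may differ by APIkit version) would look roughly like this:
<apikit:config name="Router" doc:name="Router" api="sample-api.raml" outboundHeadersMapName="outboundHeadersMapName" httpStatusVarName="httpStatus" />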
Thank you very much!

Related

Spark 2.2 toPandas() is returning Py4JError

I'm currently refactoring some PySpark code, and this specific snippet is causing me issues even though it ran fine before:
zip_table=spark.read.format("org.apache.spark.sql.execution.datasources.csv.CSVFileFormat").schema(schemas.zip_schema).load(path_to_file,header=True)
zip_pandas = zip_table.toPandas()
running into:
Py4JJavaError: An error occurred while calling o167.get.
: java.util.NoSuchElementException: spark.sql.execution.pandas.respectSessionTimeZone
at org.apache.spark.sql.internal.SQLConf$$anonfun$getConfString$2.apply(SQLConf.scala:1175)
at org.apache.spark.sql.internal.SQLConf$$anonfun$getConfString$2.apply(SQLConf.scala:1175)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.internal.SQLConf.getConfString(SQLConf.scala:1175)
at org.apache.spark.sql.RuntimeConfig.get(RuntimeConfig.scala:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Other Spark DataFrames are able to call toPandas() successfully, while others, such as this one, return the same error.
The file in question is not particularly big and should fit in the driver without a problem. Also, I know it might be better to load it into Pandas directly rather than converting from Spark, but there is further logic in the code that requires both Spark and Pandas DataFrames (a later re-architecture of this work will address that).

WSO2 Identity Server: service provider creation with permissions does not work

I get the following exception while creating a new service provider with permissions; below is a portion of the code.
iManagementServiceStub = new IdentityApplicationManagementServiceStub();
iManagementServiceStub.createApplication(createApplication);
The following is the exception I am getting on the client side.
identity.IdentityApplicationManagementServiceIdentityApplicationManagementException: Error while storing permissions for application sp3
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at identity.IdentityApplicationManagementServiceStub.createApplication(IdentityApplicationManagementServiceStub.java:1007)
at identity.AddServiceProvider.main(AddServiceProvider.java:92)
The following is the exception on the server side.
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: The path '/_system/governance/permission/applications/sp3/org.wso2.carbon.identity.application.common.model.ApplicationPermission#12809798' contains one or more illegal characters (~!##;%^*()+={}|\<>"',)
at org.wso2.carbon.registry.core.jdbc.Repository.put(Repository.java:262)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:717)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:591)
at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:828)
at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:796)
at org.wso2.carbon.registry.core.session.UserRegistry.access$900(UserRegistry.java:61)
at org.wso2.carbon.registry.core.session.UserRegistry$10.run(UserRegistry.java:786)
at org.wso2.carbon.registry.core.session.UserRegistry$10.run(UserRegistry.java:783)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:783)
at org.wso2.carbon.identity.application.mgt.ApplicationMgtUtil.storePermissions(ApplicationMgtUtil.java:299)
... 64 more
Please suggest.
When analyzing the error log, you can see that there are illegal characters in your permission name.
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: The path '/_system/governance/permission/applications/sp3/org.wso2.carbon.identity.application.common.model.ApplicationPermission#12809798' contains one or more illegal characters (~!##;%^*()+={}|\<>"',)
Please check the permission name. Those characters are reserved and have a specific use, so you cannot use them elsewhere without encoding them [1].
[1] https://en.wikipedia.org/wiki/Percent-encoding
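As an illustration only (plain Java, not the WSO2 API), here is a small sketch of how you could check a candidate permission name against the character set listed in the exception, or percent-encode it before use; the class and method names are hypothetical:
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PermissionNameCheck {

    // Characters the registry exception reports as illegal in a resource path.
    private static final String ILLEGAL = "~!@#;%^*()+={}|\\<>\"',";

    // Returns true if the candidate permission name contains none of the rejected characters.
    static boolean isValid(String name) {
        return name.chars().noneMatch(c -> ILLEGAL.indexOf(c) >= 0);
    }

    // Form-encodes the name (URLEncoder covers the characters above) if storing an encoded name is acceptable.
    static String encode(String name) throws UnsupportedEncodingException {
        return URLEncoder.encode(name, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isValid("manage#users")); // false
        System.out.println(encode("manage#users"));  // manage%23users
    }
}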

Unable to produce data to hazelcast in apache camel

I have the following route configured in Apache Camel:
from("direct:hazelCast")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
.toF("hazelcast:map:testHazel", HazelcastConstants.MAP_PREFIX);
But when the above route is invoked, I get the following error:
java.lang.NullPointerException: Null key is not allowed!
at com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:95)
at com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:89)
at org.apache.camel.component.hazelcast.map.HazelcastMapProducer.put(HazelcastMapProducer.java:125)
at org.apache.camel.component.hazelcast.map.HazelcastMapProducer.process(HazelcastMapProducer.java:60)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.MulticastProcessor.doProcessParallel(MulticastProcessor.java:814)
at org.apache.camel.processor.MulticastProcessor.access$200(MulticastProcessor.java:84)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:314)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:299)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The code that I used is almost identical to the example in the Camel docs: http://camel.apache.org/hazelcast-component.html
The following is the code snippet that I have used to produce the data to Hazelcast in Camel:
from("direct:hazelCast")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
.setHeader(HazelcastConstants.OBJECT_ID, constant("SOME BLA BLA"))
.split()
.tokenizeXML(<SOMEValidTag>).streaming()
.unmarshal(jaxb)
.convertBodyTo(<Valid>.class)
.marshal().json(JsonLibrary.Jackson)
.toF("hazelcast:%stestHazel", HazelcastConstants.MAP_PREFIX);
Note: the body needs to be converted to a class that is serializable.
You need to set the HazelcastConstants.OBJECT_ID header; it is missing in your route.
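A minimal sketch of the original route with that header added (the constant key is just a placeholder; in practice you would usually compute the key per message):
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.hazelcast.HazelcastConstants;

public class HazelcastPutRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:hazelCast")
            .setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
            // Without OBJECT_ID the map producer ends up calling IMap.put(null, body),
            // which Hazelcast rejects with "Null key is not allowed!".
            .setHeader(HazelcastConstants.OBJECT_ID, constant("someKey"))
            .toF("hazelcast:%stestHazel", HazelcastConstants.MAP_PREFIX);
    }
}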

Intellij Ultimate not executing jpa-ql query

I have IntelliJ IDEA Ultimate 14.1.4, and I was trying to execute a JPA QL query in the console.
I tried a simple one and got the error reported below. Is this some misconfiguration on my end, or a bug?
[2015-08-24 14:43:59] java.lang.IllegalArgumentException: Argument for #NotNull parameter 'stream' of com/intellij/openapi/util/io/FileUtil.loadTextAndClose must not be null
at com.intellij.openapi.util.io.FileUtil.loadTextAndClose(FileUtil.java)
at com.intellij.jpa.engine.JpaEngine.loadJpaTemplate(JpaEngine.java:241)
at com.intellij.jpa.engine.JpaEngine.createPersistenceXmlText(JpaEngine.java:151)
at com.intellij.jpa.engine.JpaEngine.createTemporaryJpaConfig(JpaEngine.java:207)
at com.intellij.jpa.engine.JpaEngine.access$000(JpaEngine.java:61)
at com.intellij.jpa.engine.JpaEngine$1.compute(JpaEngine.java:143)
at com.intellij.jpa.engine.JpaEngine$1.compute(JpaEngine.java:136)
at com.intellij.openapi.project.DumbService$1.run(DumbService.java:87)
at com.intellij.openapi.project.DumbService$2.compute(DumbService.java:123)
at com.intellij.openapi.project.DumbService$2.compute(DumbService.java:117)
at com.intellij.openapi.application.impl.ApplicationImpl.runReadAction(ApplicationImpl.java:888)
at com.intellij.openapi.project.DumbService.runReadActionInSmartMode(DumbService.java:117)
at com.intellij.openapi.project.DumbService.runReadActionInSmartMode(DumbService.java:84)
at com.intellij.jpa.engine.JpaEngine.createTemporaryConfig(JpaEngine.java:136)
at com.intellij.jpa.engine.JpaEngine.ensureInitialized(JpaEngine.java:97)
at com.intellij.jpa.engine.JpaEngine.createQuery(JpaEngine.java:104)
at com.intellij.jpa.engine.JpaEngineBase.executeQueryInner(JpaEngineBase.java:166)
at com.intellij.jpa.engine.JpaEngineBase.access$000(JpaEngineBase.java:56)
at com.intellij.jpa.engine.JpaEngineBase$1.compute(JpaEngineBase.java:128)
at com.intellij.jpa.engine.JpaEngineBase$1.compute(JpaEngineBase.java:123)
at com.intellij.database.console.AbstractEngine$4.compute(AbstractEngine.java:179)
at com.intellij.database.console.AbstractEngine$4.compute(AbstractEngine.java:174)
at com.intellij.database.console.AbstractEngine$3.run(AbstractEngine.java:159)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Hive error : java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

I have a Hive query which sometimes runs successfully, but most of the time it fails with the error "java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
Below is my error log:
java.lang.RuntimeException: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.isSplitable(CombineFileInputFormat.java:154)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:283)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:239)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:336)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:435)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:525)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:517)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:399)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1516)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1283)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1101)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:924)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:914)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:367)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:464)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:474)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:756)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:475)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:632)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:570)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:147)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.isSplitable(CombineFileInputFormat.java:151)
... 45 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:458)
... 53 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2219)
at java.util.ArrayList.grow(ArrayList.java:242)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
at java.util.ArrayList.add(ArrayList.java:440)
at java.lang.String.split(String.java:2288)
at sun.net.util.IPAddressUtil.textToNumericFormatV4(IPAddressUtil.java:47)
at java.net.InetAddress.getAllByName(InetAddress.java:1129)
at java.net.InetAddress.getAllByName(InetAddress.java:1098)
at java.net.InetAddress.getByName(InetAddress.java:1048)
at org.apache.hadoop.security.SecurityUtil$StandardHostResolver.getByName(SecurityUtil.java:474)
at org.apache.hadoop.security.SecurityUtil.getByName(SecurityUtil.java:461)
at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:235)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:215)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:677)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNsIds(DFSUtil.java:645)
at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:628)
at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:727)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:458)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:632)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:570)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:147)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
Job Submission failed with exception 'java.lang.RuntimeException(java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider)'
Could anyone tell me why this happens?
I just stumbled across a similar exception myself, and increasing the Hive client heap didn't help. I found I was able to clear up the OutOfMemoryError (GC overhead limit exceeded) by adding a partition column to the WHERE clause of the query, so I've concluded that having a very large number of splits is causing this exception. I haven't dug into the code, but I have seen string concatenation in a loop trigger GC thrashing before, and something similar might be happening in the CombineHiveInputFormat.getSplits method. A sketch of the kind of partition filter I mean is below.
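For example (hypothetical table name events and partition column dt; adjust to your schema), restricting the query to specific partitions keeps the number of input splits manageable:
-- Before: scans every partition and produces a very large number of splits
SELECT COUNT(*) FROM events;

-- After: the partition predicate limits the files that
-- CombineHiveInputFormat.getSplits has to consider
SELECT COUNT(*) FROM events WHERE dt = '2016-01-01';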