Flink submit task failed - hadoop-yarn

I am using Flink 1.6.1 and Hadoop 2.7.5. First I start a YARN session:
bin/yarn-session.sh -n 2 -jm 1024 -tm 1024 -d
then submit a job:
./bin/flink run ./examples/batch/WordCount.jar -input hdfs://CS-201:9000/LICENSE -output hdfs://CS-201:9000/wordcount-result.txt
I get this error:
[root@CS-201 flink-1.6.1]# ./bin/flink run ./examples/batch/WordCount.jar -input hdfs://CS-201:9000/LICENSE -output hdfs://CS-201:9000/wordcount-result.txt
2019-05-19 15:31:11,357 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - Found Yarn properties file under /tmp/.yarn-properties-root.
2019-05-19 15:31:11,357 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - Found Yarn properties file under /tmp/.yarn-properties-root.
2019-05-19 15:31:11,737 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - YARN properties set default parallelism to 2
2019-05-19 15:31:11,737 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - YARN properties set default parallelism to 2
YARN properties set default parallelism to 2
2019-05-19 15:31:11,777 INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at CS-201/192.168.1.201:8032
2019-05-19 15:31:11,887 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2019-05-19 15:31:11,887 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2019-05-19 15:31:11,891 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
2019-05-19 15:31:11,979 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Found application JobManager host name 'cs-202' and port '52389' from supplied application id 'application_1558248666499_0003'
Starting execution of program
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: 471f0c2d047aba74ea621c5bfe782cbf)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:260)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:474)
at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:379)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:899)
... 12 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
... 10 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 4 more
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:310)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:294)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
... 5 more
Why does this happen, and how can I fix it?
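One hint is already in the log above: the client warns that neither HADOOP_CONF_DIR nor YARN_CONF_DIR is set, so it may not be loading the Hadoop configuration it needs to talk to YARN. A minimal sketch, assuming a Hadoop install under /usr/local/hadoop (the path is an assumption; point it at wherever your core-site.xml and yarn-site.xml actually live):

```shell
# Point the Flink client at the Hadoop configuration before submitting.
# /usr/local/hadoop/etc/hadoop is an assumed path, not from the question.
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export YARN_CONF_DIR=$HADOOP_CONF_DIR
# then relaunch the session and resubmit:
#   bin/yarn-session.sh -n 2 -jm 1024 -tm 1024 -d
#   ./bin/flink run ./examples/batch/WordCount.jar ...
```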

Related

Apache flink - Timeout after submitting job on hadoop / yarn cluster

I am trying to upgrade our job from Flink 1.4.2 to 1.7.1, but I keep running into timeouts after submitting the job. The Flink job runs on our Hadoop cluster (version 2.7) with YARN.
I've seen the following behavior:
Using the same flink-conf.yaml as we used in 1.4.2: 1.5.6 / 1.6.3 / 1.7.1 all time out, while 1.4.2 works.
Using 1.5.6 with "mode: legacy" (to switch off FLIP-6) works.
Using 1.7.1 with "mode: legacy" gives a timeout (I assume this option was removed but the documentation is outdated? https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)
When the timeout happens I get the following stacktrace:
INFO class java.time.Instant does not contain a getter for field seconds
INFO class com.bol.fin_hdp.cm1.domain.Cm1Transportable does not contain a getter for field globalId
INFO Submitting job 5af931bcef395a78b5af2b97e92dcffe (detached: false).
INFO ------------------------------------------------------------
INFO The program finished with the following exception:
INFO org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
INFO at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
INFO at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:798)
INFO at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:289)
INFO at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
INFO at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1035)
INFO at org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1111)
INFO at java.security.AccessController.doPrivileged(Native Method)
INFO at javax.security.auth.Subject.doAs(Subject.java:422)
INFO at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
INFO at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
INFO at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1111)
INFO Caused by: java.lang.RuntimeException: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:43)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJobWithConfig(IntervalJobStarter.java:32)
INFO at com.bol.fin_hdp.Main.main(Main.java:8)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO at java.lang.reflect.Method.invoke(Method.java:498)
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
INFO ... 12 more
INFO Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
INFO at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
INFO at com.bol.fin_hdp.cm1.job.Job.execute(Job.java:54)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:41)
INFO ... 19 more
INFO Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
INFO at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:371)
INFO at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
INFO at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:216)
INFO at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
INFO at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:301)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
INFO at java.lang.Thread.run(Thread.java:748)
INFO Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
INFO ... 17 more
INFO Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-01...
INFO at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
INFO at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
INFO at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
INFO at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
INFO ... 15 more
INFO Caused by: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-017.example.com/some.ip.address:46500
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
INFO ... 7 more
What changed in flip-6 that might cause this behavior and how can I fix this?
For our jobs on YARN w/Flink 1.6, we had to bump up the web.timeout setting via -yD web.timeout=100000.
In our case, there was a firewall between the machine submitting the job and our Hadoop cluster.
In newer Flink versions (1.7 and up) Flink uses REST to submit jobs. On YARN setups the port number for this REST service was chosen randomly and could not be configured.
Flink 1.8.0 introduced a config option to set this to a port or port range using:
rest.bind-port: 55520-55530
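For reference, a sketch of how the two workarounds above might look together in flink-conf.yaml (the values are only examples; pick a port range your firewall allows, and note that rest.bind-port requires Flink 1.8.0 or later):

```yaml
# flink-conf.yaml -- example values, adjust for your environment
rest.bind-port: 55520-55530   # Flink 1.8.0+: pin the REST port to a firewall-friendly range
web.timeout: 100000           # milliseconds; raise if job submission times out
```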

Failed to create InputInitializerManager error - TEZ on HIVE

I have installed Apache Tez 0.8.1 with Hadoop 2.7.0 and Hive 2.0.1. I am able to successfully run MapReduce jobs, but when I configure Hive to use Tez and try to run a simple count query, it returns the error below. From the error it appears to be looking for a jar; I have placed the jar on the classpath, but the error did not resolve.
Please help me resolve this. Thanks in advance!
hive> select count(*) from sample1;
Query ID = root_20160728215555_a58e91a6-8913-4a57-8715-bc1739a2cb02
Total jobs = 1
Launching Job 1 out of 1
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container FAILED -1 0 0 -1 0 0
Reducer 2 container KILLED 1 0 0 1 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 0.17 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1469720608711_0011_1_00, diagnostics=[Vertex vertex_1469720608711_0011_1_00 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:138)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:115)
at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4676)
at org.apache.tez.dag.app.dag.impl.VertexImpl.access$4300(VertexImpl.java:204)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:3445)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3394)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3375)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:57)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1975)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:203)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2090)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2076)
at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
... 20 more
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/split/SplitLocationProvider
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:96)
... 25 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.split.SplitLocationProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 26 more
]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1469720608711_0011_1_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1469720608711_0011_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
UPDATE:
After I faced the above issue, I copied hadoop-core-1.2.1.jar into the Hive lib folder. After that I faced another issue while starting Hive. From the trace I could figure out that an illegal argument is being passed somewhere.
Exception in thread "main" java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:543)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:516)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:712)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:648)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
... 14 more
Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 1.2.1
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:165)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:132)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:93)
at org.apache.hadoop.hive.metastore.ObjectStore.getDataSourceProps(ObjectStore.java:376)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:268)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:517)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:482)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:544)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:370)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:219)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
The issue is with the Apache Tez version: Tez 0.8.1 is not compatible with Hadoop 2.7.0 and Hive 2.0.1.
I downloaded the 0.8.4 source and built it, which resolved the issue.
Thanks!!

Is there a sample Apache Phoenix + Spring Boot + Gradle sample project?

Without phoenix-core, Spring Boot starts Tomcat correctly.
Problem: add compile('org.apache.phoenix:phoenix-core:4.7.0-HBase-1.1') to the dependencies section of my build.gradle and Tomcat fails to start:
dependencies {
compile('org.apache.phoenix:phoenix-core:4.7.0-HBase-1.1')
compile("org.springframework.data:spring-data-commons")
compile("org.springframework.boot:spring-boot-starter-jdbc:1.3.2.RELEASE")
compile('org.springframework.boot:spring-boot-starter-web:1.3.2.RELEASE')
compile("org.springframework:spring-test:4.2.4.RELEASE")
}
with the following exception,
org.apache.catalina.core.ContainerBase startInternal
SEVERE: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[/phoenix]]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:816)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[/phoenix]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
... 6 more
Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.addServlet(Ljava/lang/String;Ljavax/servlet/Servlet;)Ljavax/servlet/ServletRegistration$Dynamic;
at org.springframework.boot.context.embedded.ServletRegistrationBean.onStartup(ServletRegistrationBean.java:190)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.selfInitialize(EmbeddedWebApplicationContext.java:225)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.access$000(EmbeddedWebApplicationContext.java:85)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext$1.onStartup(EmbeddedWebApplicationContext.java:209)
at org.springframework.boot.context.embedded.tomcat.TomcatStarter.onStartup(TomcatStarter.java:55)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5513)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 6 more
2016-03-07 23:40:29.048 WARN 71400 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
Adding the following line to the dependencies resolved the issue:
providedCompile "javax.servlet:javax.servlet-api:3.0.1"
dependencies {
providedCompile "javax.servlet:javax.servlet-api:3.0.1"
compile('org.apache.phoenix:phoenix-core:4.7.0-HBase-1.1')
compile("org.springframework.data:spring-data-commons")
compile("org.springframework.boot:spring-boot-starter-jdbc:1.3.2.RELEASE")
compile('org.springframework.boot:spring-boot-starter-web:1.3.2.RELEASE')
compile("org.springframework:spring-test:4.2.4.RELEASE")
}
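An alternative sketch (an untested assumption on my part, not from the answer above): instead of pinning the Servlet 3.0 API via providedCompile, exclude the old servlet-api that phoenix-core drags in transitively, so embedded Tomcat's own Servlet 3.x classes win:

```groovy
dependencies {
    compile('org.apache.phoenix:phoenix-core:4.7.0-HBase-1.1') {
        // phoenix-core pulls in an old javax.servlet jar through its Hadoop
        // dependencies; excluding it avoids the NoSuchMethodError on
        // ServletContext.addServlet seen in the trace above
        exclude group: 'javax.servlet', module: 'servlet-api'
    }
}
```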

Error starting PIG: ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected

I downloaded Pig from Apache and installed it, then tried to run it using pig -x local.
This is what I get:
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Logging error messages to: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,097 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/ubuntu/.pigbootup not found
2015-12-10 15:06:26,132 [main] ERROR org.apache.pig.Main - ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
Details at logfile: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,157 [main] INFO org.apache.pig.Main - Pig script completed in 206 milliseconds (206 ms)
My log file contains the following:
Error before Pig is launched
----------------------------
ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
java.lang.IncompatibleClassChangeError: Found interface jline.Terminal, but class was expected
at jline.ConsoleReader.<init>(ConsoleReader.java:174)
at jline.ConsoleReader.<init>(ConsoleReader.java:169)
at org.apache.pig.Main.run(Main.java:556)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
================================================================================
After I downloaded and extracted the package, I did the following (pig is in /usr/local/pig):
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79
export PIG_PREFIX=/usr/local/pig
export PATH=$PATH:$PIG_PREFIX/bin
Any ideas what is wrong?
Thanks,
Serban
Add this -
export HADOOP_USER_CLASSPATH_FIRST=true
See https://issues.apache.org/jira/browse/PIG-3851
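Why this flag helps (my reading of the error, not spelled out in the JIRA): Pig's console code was compiled against jline 1.x, where jline.Terminal is a class, while Hadoop 2.x puts jline 2.x, where Terminal is an interface, earlier on the classpath. That mismatch is exactly what IncompatibleClassChangeError reports, and the flag flips the precedence so Pig's bundled jline wins. A minimal sketch:

```shell
# Workaround from PIG-3851: make Pig's bundled jars take precedence over
# Hadoop's, so Pig's jline 1.x (Terminal is a class) is loaded instead of
# Hadoop's jline 2.x (Terminal is an interface).
export HADOOP_USER_CLASSPATH_FIRST=true
# then retry:  pig -x local
```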

Analyze SonarQube Eclipse - Caused by: org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 15

I'm using Eclipse Luna (Build id: 20150109-0600, 64-bit) on Windows with the SonarQube plugin (SonarQube Java Analyser 3.4.0.20140404-0949-RELEASE).
When I try to run an analysis, the plugin can connect to the server and download the issues from the last analysis (the problems are shown in the SonarQube Issues view), but after that there is this error:
java.lang.IllegalStateException: Error status [command: C:\Program Files\Java\jre8\bin\java.exe -cp D:\temp\sonar-runner-impl1326048247551966004.jar org.sonar.runner.impl.BatchLauncherMain D:\temp\sonar-project7307491695046280128.properties]: 1
at org.sonar.runner.api.ForkedRunner.fork(ForkedRunner.java:199)
at org.sonar.runner.api.ForkedRunner.doExecute(ForkedRunner.java:144)
at org.sonar.runner.api.Runner.execute(Runner.java:90)
at org.sonar.ide.eclipse.core.internal.jobs.AnalyseProjectJob.run(AnalyseProjectJob.java:343)
at org.sonar.ide.eclipse.core.internal.jobs.AnalyseProjectJob.run(AnalyseProjectJob.java:130)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
The strange thing is that in D:\temp there is no sonar-runner-implXXX.jar or sonar-projectXXX.properties file, but there are sonar-runner-batchXXX.jar files (like sonar-runner-batch1316679692029245803.jar).
Another strange thing: the command line is using a JRE 8, but my JAVA_HOME is a JDK 7, the project where I'm running the analysis is configured to use a JDK 7, and all my "Installed JREs" in Eclipse are actually JDKs.
Can someone help me?
========================================================================
Hi
1 - About the temp files: they really are created and then deleted (I verified by monitoring the temp directory while running the analysis).
2 - I could even copy the files and run the command line by hand; in this case I could see the real stack trace:
C:\>c:\Progra~1\Java\jdk1.8.0\jre\bin\java.exe -cp D:\temp\sonar-runner-impl5987517469765765781.jar org.sonar.runner.impl.BatchLauncherMain D:\temp\sonar-project5904228863019510021.properties
INFO: SonarQube Server 4.3.2
14:01:54.841 INFO - Preview mode
14:01:54.851 INFO - Load batch settings
14:01:55.338 INFO - User cache: C:\Users\fred\.sonar\cache
14:01:55.353 INFO - Install plugins
14:01:55.391 INFO - Include plugins:
14:01:55.391 INFO - Exclude plugins: devcockpit, buildstability, pdfreport, report, buildbreaker, scmactivity, views, jira
Exception in thread "main" org.sonar.runner.impl.RunnerException: Unable to execute Sonar
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.impl.BatchLauncherMain.execute(BatchLauncherMain.java:41)
at org.sonar.runner.impl.BatchLauncherMain.main(BatchLauncherMain.java:59)
Caused by: org.sonar.api.utils.SonarException: You're not authorized to execute a dry run analysis. Please contact your SonarQube administrator.
at org.sonar.batch.bootstrap.ServerClient.handleHttpException(ServerClient.java:120)
at org.sonar.batch.bootstrap.ServerClient.download(ServerClient.java:71)
at org.sonar.batch.bootstrap.PreviewDatabase.downloadDatabase(PreviewDatabase.java:85)
at org.sonar.batch.bootstrap.PreviewDatabase.start(PreviewDatabase.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1015)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1008)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:766)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:91)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:92)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:74)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
... 6 more
C:\>
3 - The real problem seems to be a missing permission...
But how can I grant the "dry run analysis" permission to a user?
========================================================================
3 - I solved the permission problem by granting the "Execute Preview Analysis" permission to the users.
4 - Now another error occurred: Caused by: org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 15
And I have no clue about it...
C:\>c:\Progra~1\Java\jdk1.8.0\jre\bin\java.exe -cp D:\temp\sonar-runner-impl5987517469765765781.jar org.sonar.runner.impl.BatchLauncherMain D:\temp\sonar-project5904228863019510021.properties
INFO: SonarQube Server 4.3.2
14:53:18.370 INFO - Preview mode
14:53:18.389 INFO - Load batch settings
14:53:18.794 INFO - User cache: C:\Users\fred\.sonar\cache
14:53:18.814 INFO - Install plugins
14:53:18.854 INFO - Include plugins:
14:53:18.854 INFO - Exclude plugins: devcockpit, buildstability, pdfreport, report, buildbreaker, scmactivity, views, jira
14:53:19.550 INFO - Create JDBC datasource for jdbc:h2:D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\.sonartmp\preview1422550399090-0
14:53:23.617 INFO - Initializing Hibernate
14:53:26.543 INFO - Load project settings
14:53:26.634 INFO - Apply project exclusions
14:53:26.862 INFO - ------------- Scan sgl
14:53:26.873 INFO - Load module settings
14:53:27.828 INFO - Loading technical debt model...
14:53:27.842 INFO - Loading technical debt model done: 14 ms
14:53:27.842 INFO - Loading rules...
14:53:28.111 INFO - Loading rules done: 269 ms
14:53:28.152 INFO - Configure Maven plugins
14:53:28.356 INFO - Compare to previous analysis (2015-01-29)
14:53:28.366 INFO - Compare over 30 days (2014-12-30, analysis of 2015-01-01 02:44:35.053)
14:53:28.376 INFO - Compare to previous version (2015-01-29)
14:53:28.376 INFO - No quality gate is configured.
14:53:28.485 INFO - Base dir: D:\workspaces\workspace-fred\sgl
14:53:28.485 INFO - Working dir: D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core
14:53:28.485 INFO - Source dirs: D:\workspaces\workspace-fred\sgl\webApplication, D:\workspaces\workspace-fred\sgl\clientconf, D:\workspaces\workspace-fred\sgl\src\java, D:\workspaces\workspace-fred\sgl\src\conf, D:\workspaces\workspace-fred\sgl\conf
14:53:28.485 INFO - Test dirs: D:\workspaces\workspace-fred\sgl\src\test
14:53:28.485 INFO - Binary dirs: D:\workspaces\workspace-fred\sgl\webApplication\WEB-INF\classes
14:53:28.485 INFO - Source encoding: UTF-8, default locale: en_US
14:53:28.485 INFO - Index files
14:53:46.434 INFO - 959 files indexed
14:53:50.916 INFO - Quality profile for java: Minds Java Profile
14:53:51.088 INFO - Sensor JavaSquidSensor...
14:53:51.223 INFO - Java Main Files AST scan...
14:53:51.227 INFO - 946 source files to be analyzed
14:54:01.229 INFO - 127/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\model\DatabaseVersion.java
14:54:11.240 INFO - 176/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\model\MaterialReservationStatus.java
14:54:21.269 INFO - 240/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\model\strategy\AbstractTicketChangeRequestStatusStrategy.java
14:54:31.299 INFO - 321/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\permission\ApplicationModuleStrategyCourse.java
14:54:36.119 ERROR - Class not found: org.slf4j.Logger
14:54:41.320 INFO - 455/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\action\ActionInitSaveCorrectiveMaintenance.java
14:54:51.324 INFO - 541/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\action\ActionListProvider.java
14:55:01.370 INFO - 614/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\action\ActionSearch.java
14:55:11.395 INFO - 674/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\dwr\MessageManager.java
14:55:21.398 INFO - 748/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\service\model\xstream\WsPackageTrackingMainData.java
14:55:23.288 ERROR - Class not found: javax.el.ELContext
... (the same "Class not found: javax.el.ELContext" ERROR line repeats about 40 more times) ...
14:55:31.380 INFO - 848/946 files analyzed, current is D:\workspaces\workspace-fred\sgl\src\java\br\com\mindsatwork\sgl\web\taglib\display\export\ReportExcelHssfView.java
14:55:31.436 ERROR - Class not found: javax.el.ELContext
14:55:31.536 ERROR - Class not found: javax.el.ELContext
14:55:39.628 INFO - 946/946 source files analyzed
14:55:39.791 INFO - Java Main Files AST scan done: 108568 ms
14:55:39.880 INFO - Java bytecode scan...
14:55:41.734 INFO - Java bytecode scan done: 1854 ms
14:55:41.734 INFO - Java Test Files AST scan...
14:55:41.735 INFO - 13 source files to be analyzed
14:55:41.964 INFO - Java Test Files AST scan done: 230 ms
14:55:41.964 INFO - 13/13 source files analyzed
14:55:42.290 INFO - Package design analysis...
14:55:43.038 INFO - Package design analysis done: 748 ms
14:55:43.509 INFO - Sensor JavaSquidSensor done: 112421 ms
14:55:43.509 INFO - Sensor QProfileSensor...
14:55:43.512 INFO - Sensor QProfileSensor done: 3 ms
14:55:43.512 INFO - Sensor PmdSensor...
14:55:43.515 INFO - Execute PMD 5.1.1...
14:55:43.546 INFO - Java version: 1.7
14:55:43.593 INFO - PMD configuration: D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\pmd.xml
14:55:52.299 INFO - PMD configuration: D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\pmd-unit-tests.xml
14:55:52.300 INFO - Execute PMD 5.1.1 done: 8785 ms
14:55:52.328 INFO - Sensor PmdSensor done: 8816 ms
14:55:52.328 INFO - Sensor SurefireSensor...
14:55:52.330 INFO - parsing D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\build\surefire-reports
14:55:52.330 WARN - Reports path not found: D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\build\surefire-reports
14:55:52.330 INFO - Sensor SurefireSensor done: 2 ms
14:55:52.330 INFO - Sensor InitialOpenIssuesSensor...
14:55:52.373 INFO - Sensor InitialOpenIssuesSensor done: 43 ms
14:55:52.374 INFO - Sensor ProfileEventsSensor...
14:55:52.396 INFO - Sensor ProfileEventsSensor done: 22 ms
14:55:52.396 INFO - Sensor ProjectLinksSensor...
14:55:52.409 INFO - Sensor ProjectLinksSensor done: 13 ms
14:55:52.410 INFO - Sensor FindbugsSensor...
14:55:52.412 INFO - Execute Findbugs 2.0.3...
14:55:53.888 INFO - Findbugs output report: D:\workspaces\workspace-fred\.metadata\.plugins\org.eclipse.core.resources\.projects\sgl\org.sonar.ide.eclipse.core\findbugs-result.xml
Exception in thread "main" org.sonar.runner.impl.RunnerException: Unable to execute Sonar
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.impl.BatchLauncherMain.execute(BatchLauncherMain.java:41)
at org.sonar.runner.impl.BatchLauncherMain.main(BatchLauncherMain.java:59)
Caused by: org.sonar.api.utils.SonarException: Can not execute Findbugs
at org.sonar.plugins.findbugs.FindbugsExecutor.execute(FindbugsExecutor.java:154)
at org.sonar.plugins.findbugs.FindbugsSensor.analyse(FindbugsSensor.java:59)
at org.sonar.batch.phases.SensorsExecutor.executeSensor(SensorsExecutor.java:79)
at org.sonar.batch.phases.SensorsExecutor.execute(SensorsExecutor.java:70)
at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:131)
at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:178)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:199)
at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:194)
at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:187)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:56)
at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:44)
at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:82)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrap.BootstrapContainer.executeTask(BootstrapContainer.java:175)
at org.sonar.batch.bootstrap.BootstrapContainer.doAfterStart(BootstrapContainer.java:163)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:92)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:74)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.sonar.plugins.findbugs.FindbugsExecutor.execute(FindbugsExecutor.java:146)
... 35 more
Caused by: java.lang.ExceptionInInitializerError
at edu.umd.cs.findbugs.detect.SerializableIdiom.visit(SerializableIdiom.java:609)
at edu.umd.cs.findbugs.visitclass.BetterVisitor.visitField(BetterVisitor.java:286)
at org.apache.bcel.classfile.Field.accept(Field.java:92)
at edu.umd.cs.findbugs.visitclass.PreorderVisitor.doVisitField(PreorderVisitor.java:266)
at edu.umd.cs.findbugs.visitclass.PreorderVisitor.visitJavaClass(PreorderVisitor.java:349)
at org.apache.bcel.classfile.JavaClass.accept(JavaClass.java:214)
at edu.umd.cs.findbugs.detect.SerializableIdiom.visitClassContext(SerializableIdiom.java:133)
at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:74)
at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:1209)
at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:282)
at org.sonar.plugins.findbugs.FindbugsExecutor$FindbugsTask.call(FindbugsExecutor.java:201)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 15
at org.apache.bcel.classfile.Constant.readConstant(Constant.java:147)
at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:68)
at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:237)
at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:143)
at org.apache.bcel.util.SyntheticRepository.loadClass(SyntheticRepository.java:179)
at org.apache.bcel.util.SyntheticRepository.loadClass(SyntheticRepository.java:127)
at edu.umd.cs.findbugs.ba.AnalysisContext.lookupSystemClass(AnalysisContext.java:501)
at edu.umd.cs.findbugs.DeepSubtypeAnalysis.<clinit>(DeepSubtypeAnalysis.java:39)
... 15 more
C:\>
========================================================================
Hi
I found a link reporting the same error on Tomcat, which says it only happens with JDK 8.
So I changed the JDK version used by the analysis (the project is compiled with JDK 7, 64-bit).
I tried:
JDK 6 32 bits: no error (but a lot of "java.lang.UnsupportedClassVersionError" warnings, since the project is compiled for Java 7)
JDK 7 32 bits: no error
JDK 7 64 bits: no error
JDK 8 64 bits: Caused by: org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 15
It seems that Sonar only supports JDK 8 in recent versions, since 4.3.
But my Sonar server is 4.3.2, so it should be supported...
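For what it's worth, constant-pool tag 15 is CONSTANT_MethodHandle, which only exists in the Java 7+ class-file format; the BCEL version bundled with FindBugs 2.0.3 predates it, which would explain why it chokes when asked to parse classes from a JDK 8 runtime. A quick way to see which format a given .class file uses is to read its major version (50 = Java 6, 51 = Java 7, 52 = Java 8). A minimal sketch (the ClassVersionCheck class name is mine, not part of Sonar or FindBugs):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: reads the class-file major version, which identifies
// the compiler release that produced a class.
public class ClassVersionCheck {

    /** Returns the major version: 50 = Java 6, 51 = Java 7, 52 = Java 8. */
    public static int majorVersion(byte[] classFile) {
        // A class file starts with the magic number 0xCAFEBABE, followed by a
        // 2-byte minor version and a 2-byte major version, all big-endian.
        if (classFile.length < 8
                || (classFile[0] & 0xFF) != 0xCA || (classFile[1] & 0xFF) != 0xFE
                || (classFile[2] & 0xFF) != 0xBA || (classFile[3] & 0xFF) != 0xBE) {
            throw new IllegalArgumentException("Not a class file");
        }
        return ((classFile[6] & 0xFF) << 8) | (classFile[7] & 0xFF);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("major version: "
                + majorVersion(Files.readAllBytes(Paths.get(args[0]))));
    }
}
```

Running this against the classes in the binary dir (or against a class from the JDK's own runtime) shows which release actually compiled them.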
Note: if I compile the project with JDK 8 and then run an analysis, I get a different error:
...
Caused by: java.lang.ArrayIndexOutOfBoundsException: 7352
at org.objectweb.asm.ClassReader.readClass(Unknown Source)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at edu.umd.cs.findbugs.asm.FBClassReader.accept(FBClassReader.java:44)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at edu.umd.cs.findbugs.classfile.engine.ClassParserUsingASM.parse(ClassParserUsingASM.java:110)
at edu.umd.cs.findbugs.classfile.engine.ClassParserUsingASM.parse(ClassParserUsingASM.java:587)
at edu.umd.cs.findbugs.classfile.engine.ClassInfoAnalysisEngine.analyze(ClassInfoAnalysisEngine.java:76)
at edu.umd.cs.findbugs.classfile.engine.ClassInfoAnalysisEngine.analyze(ClassInfoAnalysisEngine.java:38)
at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getClassAnalysis(AnalysisCache.java:268)
at edu.umd.cs.findbugs.ba.XFactory.getXClass(XFactory.java:652)
at edu.umd.cs.findbugs.ba.AnalysisContext.setAppClassList(AnalysisContext.java:932)
at edu.umd.cs.findbugs.FindBugs2.setAppClassList(FindBugs2.java:997)
at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:225)
at org.sonar.plugins.findbugs.FindbugsExecutor$FindbugsTask.call(FindbugsExecutor.java:201)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I had exactly the same problem.
The issue was that our Sonar server was running on Java 7, while the Eclipse SonarQube plugin was running on Java 8.
(You can check that from Help -> About Eclipse -> Installation Details -> Configuration tab.)
I managed to solve the issue by adding the lines below to the eclipse.ini file (located in the Eclipse installation directory):
-vm
C:\Program Files (x86)\Java\jdk1.7.0_72\bin\javaw.exe
(these lines should come above -vmargs)
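To make the placement concrete, here is a sketch of how the relevant part of eclipse.ini ends up looking (the -startup and -vmargs entries are illustrative placeholders from a typical install; only the two -vm lines are the actual fix):

```
-startup
plugins/org.eclipse.equinox.launcher_<version>.jar
-vm
C:\Program Files (x86)\Java\jdk1.7.0_72\bin\javaw.exe
-vmargs
-Xms256m
-Xmx1024m
```

Note that -vm and the JVM path must be on two separate lines, and both must appear before -vmargs, otherwise Eclipse treats the path as a JVM argument and ignores it.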