I recently updated IntelliJ IDEA to version 2020.2. Since then, in all of the projects I'm working with (Java + Maven), when I try to build a project (menu Build -> Build Project), I get the following log:
Executing pre-compile tasks...
Loading Ant Configuration...
Running Ant Tasks...
at index 8
Executing post-compile tasks...
Loading Ant Configuration...
Running Ant Tasks...
Synchronizing output directories...
30.07.2020 10:15 - Build completed with 1 error and 0 warnings in 942 ms
Building via Maven works.
Has anyone faced the same or a similar problem? Are there any workarounds or a magic flag to set?
I have also found the stack trace of the exception:
java.lang.NullPointerException: at index 8
java.util.concurrent.ExecutionException: java.lang.NullPointerException: at index 8
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at com.intellij.compiler.server.BuildManager.takePreloadedProcess(BuildManager.java:676)
at com.intellij.compiler.server.BuildManager.lambda$scheduleBuild$11(BuildManager.java:709)
at com.intellij.util.concurrency.BoundedTaskExecutor.doRun(BoundedTaskExecutor.java:215)
at com.intellij.util.concurrency.BoundedTaskExecutor.access$200(BoundedTaskExecutor.java:26)
at com.intellij.util.concurrency.BoundedTaskExecutor$1.execute(BoundedTaskExecutor.java:194)
at com.intellij.util.ConcurrencyUtil.runUnderThreadName(ConcurrencyUtil.java:207)
at com.intellij.util.concurrency.BoundedTaskExecutor$1.run(BoundedTaskExecutor.java:183)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:668)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:665)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:665)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NullPointerException: at index 8
at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:225)
at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:215)
at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:209)
at com.google.common.collect.ImmutableList.construct(ImmutableList.java:346)
at com.google.common.collect.ImmutableList.of(ImmutableList.java:185)
at org.jetbrains.android.compiler.AndroidBuildProcessParametersProvider.getClassPath(AndroidBuildProcessParametersProvider.java:23)
at com.intellij.compiler.server.impl.BuildProcessClasspathManager.getDynamicClasspath(BuildProcessClasspathManager.java:153)
at com.intellij.compiler.server.impl.BuildProcessClasspathManager.getBuildProcessPluginsClasspath(BuildProcessClasspathManager.java:40)
at com.intellij.compiler.server.BuildManager.launchBuildProcess(BuildManager.java:1248)
at com.intellij.compiler.server.BuildManager.lambda$launchPreloadedBuildProcess$18(BuildManager.java:993)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
... 12 more
2020-07-30 10:23:34,055 [1818712] INFO - ij.compiler.impl.CompileDriver - java.lang.NullPointerException: at index 8
at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:225)
at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:215)
at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:209)
at com.google.common.collect.ImmutableList.construct(ImmutableList.java:346)
at com.google.common.collect.ImmutableList.of(ImmutableList.java:185)
at org.jetbrains.android.compiler.AndroidBuildProcessParametersProvider.getClassPath(AndroidBuildProcessParametersProvider.java:23)
at com.intellij.compiler.server.impl.BuildProcessClasspathManager.getDynamicClasspath(BuildProcessClasspathManager.java:153)
at com.intellij.compiler.server.impl.BuildProcessClasspathManager.getBuildProcessPluginsClasspath(BuildProcessClasspathManager.java:40)
at com.intellij.compiler.server.BuildManager.launchBuildProcess(BuildManager.java:1248)
at com.intellij.compiler.server.BuildManager.lambda$scheduleBuild$10(BuildManager.java:809)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at com.intellij.util.concurrency.BoundedTaskExecutor.doRun(BoundedTaskExecutor.java:215)
at com.intellij.util.concurrency.BoundedTaskExecutor.access$200(BoundedTaskExecutor.java:26)
at com.intellij.util.concurrency.BoundedTaskExecutor$1.execute(BoundedTaskExecutor.java:194)
at com.intellij.util.ConcurrencyUtil.runUnderThreadName(ConcurrencyUtil.java:207)
at com.intellij.util.concurrency.BoundedTaskExecutor$1.run(BoundedTaskExecutor.java:183)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:668)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:665)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:665)
at java.base/java.lang.Thread.run(Thread.java:834)
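For context, the "at index 8" message is produced by Guava: ImmutableList.of rejects null elements and reports the position of the first null it finds, so one of the classpath entries collected by AndroidBuildProcessParametersProvider.getClassPath is null. A minimal Java sketch that reproduces the same message (the class name and array contents are made up for illustration):

import com.google.common.collect.ImmutableList;

public class NullIndexDemo {
    public static void main(String[] args) {
        // Nine classpath entries; the one at index 8 is null,
        // e.g. because a plugin failed to locate one of its jars.
        String[] cp = {"a.jar", "b.jar", "c.jar", "d.jar", "e.jar",
                       "f.jar", "g.jar", "h.jar", null};
        // Throws java.lang.NullPointerException: at index 8
        // via ObjectArrays.checkElementNotNull, exactly as in the trace above.
        ImmutableList.of(cp[0], cp[1], cp[2], cp[3], cp[4],
                         cp[5], cp[6], cp[7], cp[8]);
    }
}

This points at the Android Support plugin's classpath assembly rather than at the project itself, which is consistent with the build failing in every project after the IDE update.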
I am a newbie using IntelliJ. I don't know why this happens in my scenario. Does anyone know the reason?
My scenario:
- I build my web application (the war file and the exploded war build successfully).
Case 1:
- I add my web application to a Tomcat instance in IntelliJ, deploy my war file example.war, and start it - it fails.
Case 2:
- I copy the war file to an external Tomcat and start it - it succeeds.
In IntelliJ:
- Debug view - Output:
NO ERROR
- idea.log
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: D:\03.sonat\02.cc\workforce\ProjectTemplate\WebTemplate\out\artifacts\WebTemplate_war_exploded\META-INF\context.xml (The system cannot find the file specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.jetbrains.idea.tomcat.TomcatUtil.loadXMLFile(TomcatUtil.java:171)
... 31 more
2017-02-24 15:45:26,140 [3049638] INFO - tbrains.idea.tomcat.TomcatUtil - Cannot load D:\03.sonat\02.cc\workforce\ProjectTemplate\WebTemplate\out\artifacts\WebTemplate_war_exploded\META-INF\context.xml: D:\03.sonat\02.cc\workforce\ProjectTemplate\WebTemplate\out\artifacts\WebTemplate_war_exploded\META-INF\context.xml (The system cannot find the file specified)
com.intellij.execution.ExecutionException: Cannot load D:\03.sonat\02.cc\workforce\ProjectTemplate\WebTemplate\out\artifacts\WebTemplate_war_exploded\META-INF\context.xml: D:\03.sonat\02.cc\workforce\ProjectTemplate\WebTemplate\out\artifacts\WebTemplate_war_exploded\META-INF\context.xml (The system cannot find the file specified)
at org.jetbrains.idea.tomcat.TomcatUtil.loadXMLFile(TomcatUtil.java:178)
at org.jetbrains.idea.tomcat.TomcatUtil.findContextInContextXml(TomcatUtil.java:98)
at org.jetbrains.idea.tomcat.TomcatUtil.findContextElement(TomcatUtil.java:352)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl$4.doPerform(TomcatAdminLocalServerImpl.java:128)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl$DeployStep.perform(TomcatAdminLocalServerImpl.java:289)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl.doDeploy(TomcatAdminLocalServerImpl.java:131)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$4.doPerform(JavaeeJmxAdminServerBase.java:124)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$JmxOperation.perform(JavaeeJmxAdminServerBase.java:247)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.doStartDeploy(JavaeeJmxAdminServerBase.java:139)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$2.setDeploymentStatus(JavaeeJmxAdminServerBase.java:94)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$DeploymentModelOperation.doSetDeploymentStatus(JavaeeJmxAdminServerBase.java:274)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$3.doPerform(JavaeeJmxAdminServerBase.java:104)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$JmxOperation.perform(JavaeeJmxAdminServerBase.java:247)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.doStartDeployWithUndeploy(JavaeeJmxAdminServerBase.java:111)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.startDeploy(JavaeeJmxAdminServerBase.java:78)
at org.jetbrains.idea.tomcat.admin.TomcatAdminServerBase.startDeploy(TomcatAdminServerBase.java:119)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl.startDeploy(TomcatAdminLocalServerImpl.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.remoteServer.agent.impl.ThreadInvocationHandler.a(ThreadInvocationHandler.java:56)
at com.intellij.remoteServer.agent.impl.ThreadInvocationHandler.b(ThreadInvocationHandler.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.intellij.util.concurrency.BoundedTaskExecutor.runFirstTaskThenPollAndRunRest(BoundedTaskExecutor.java:178)
at com.intellij.util.concurrency.BoundedTaskExecutor.access$000(BoundedTaskExecutor.java:40)
at com.intellij.util.concurrency.BoundedTaskExecutor$2.run(BoundedTaskExecutor.java:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml (The system cannot find the file specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.jetbrains.idea.tomcat.TomcatUtil.loadXMLFile(TomcatUtil.java:171)
... 31 more
2017-02-24 15:45:26,140 [3049638] INFO - tbrains.idea.tomcat.TomcatUtil - Cannot load D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml: D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml (The system cannot find the file specified)
com.intellij.execution.ExecutionException: Cannot load D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml: D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml (The system cannot find the file specified)
at org.jetbrains.idea.tomcat.TomcatUtil.loadXMLFile(TomcatUtil.java:178)
at org.jetbrains.idea.tomcat.TomcatUtil.findContextInContextXml(TomcatUtil.java:98)
at org.jetbrains.idea.tomcat.TomcatUtil.findContextElement(TomcatUtil.java:352)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl$4.doPerform(TomcatAdminLocalServerImpl.java:128)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl$DeployStep.perform(TomcatAdminLocalServerImpl.java:289)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl.doDeploy(TomcatAdminLocalServerImpl.java:131)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$4.doPerform(JavaeeJmxAdminServerBase.java:124)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$JmxOperation.perform(JavaeeJmxAdminServerBase.java:247)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.doStartDeploy(JavaeeJmxAdminServerBase.java:139)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$2.setDeploymentStatus(JavaeeJmxAdminServerBase.java:94)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$DeploymentModelOperation.doSetDeploymentStatus(JavaeeJmxAdminServerBase.java:274)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$3.doPerform(JavaeeJmxAdminServerBase.java:104)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase$JmxOperation.perform(JavaeeJmxAdminServerBase.java:247)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.doStartDeployWithUndeploy(JavaeeJmxAdminServerBase.java:111)
at com.intellij.javaee.oss.admin.jmx.JavaeeJmxAdminServerBase.startDeploy(JavaeeJmxAdminServerBase.java:78)
at org.jetbrains.idea.tomcat.admin.TomcatAdminServerBase.startDeploy(TomcatAdminServerBase.java:119)
at org.jetbrains.idea.tomcat.admin.TomcatAdminLocalServerImpl.startDeploy(TomcatAdminLocalServerImpl.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.remoteServer.agent.impl.ThreadInvocationHandler.a(ThreadInvocationHandler.java:56)
at com.intellij.remoteServer.agent.impl.ThreadInvocationHandler.b(ThreadInvocationHandler.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.intellij.util.concurrency.BoundedTaskExecutor.runFirstTaskThenPollAndRunRest(BoundedTaskExecutor.java:178)
at com.intellij.util.concurrency.BoundedTaskExecutor.access$000(BoundedTaskExecutor.java:40)
at com.intellij.util.concurrency.BoundedTaskExecutor$2.run(BoundedTaskExecutor.java:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: D:\ProjectTemplate\WebTemplate\out\artifacts\example_war_exploded\META-INF\context.xml (The system cannot find the file specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.jetbrains.idea.tomcat.TomcatUtil.loadXMLFile(TomcatUtil.java:171)
... 30 more
What happened?
In my case, I found a way to start Tomcat successfully in IntelliJ:
- Exit Intellij
- Open task manager
- Find the java process - End process
- Re-open Intellij
- Start tomcat server
Try these steps if you hit the same problem.
I am getting a LeaseExpiredException in my Hadoop cluster -
tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-2-148.log
2016-09-21 11:54:14,533 INFO BlockStateChange (IPC Server handler 10
on 8020): BLOCK* InvalidateBlocks: add blk_1073747501_6677 to
172.30.2.189:50010 2016-09-21 11:54:14,534 INFO org.apache.hadoop.ipc.Server (IPC Server handler 31 on 8020): IPC
Server handler 31 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from
172.30.2.189:37674 Call#34 Retry#0: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease
on
/tmp/hive/hadoop/_tez_session_dir/1e4f71f0-9f29-468d-980e-9f19690bf849/.tez/application_1474442135017_0114/recovery/1/summary
(inode 26350): File does not exist. Holder
DFSClient_NONMAPREDUCE_-143782605_1 does not have any open files.
2016-09-21 11:54:15,557 INFO org.apache.hadoop.hdfs.StateChange (IPC
Server handler 0 on 8020): BLOCK* allocate
blk_1073747503_6679{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-86592ba7-c51a-431d-8019-9e362d721b28:NORMAL:172.30.2.189:50010|RBW]]} for
/var/log/hadoop-yarn/apps/hadoop/logs/application_1474442135017_0114/ip-172-30-2-122.us-west-2.compute.internal_8041.tmp
Also, some of the Hive queries are failing. I am guessing it is because of the above issue.
tail -f /var/log/hive/hive-server2.log
2016-09-21T11:59:35,126 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:execute(1477)) - Executing command(queryId=hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7):
INSERT INTO TABLE validation_logs
SELECT
"18364",
"TABLE_VALIDATION",
error.code,
error.validator,
get_json_object(key, '$.table_name'),
NULL,
NULL,
error.failure_msg,
FROM_UNIXTIME(UNIX_TIMESTAMP('20160921','yyyyMMdd')),
from_unixtime(unix_timestamp())
FROM
(SELECT
MAP(concat("{\"table_name\" : \"", table_name , "\"}"), error) AS err_map
FROM table_level_validation_result
) AS res
LATERAL VIEW EXPLODE(res.err_map) tmp AS key, error WHERE error IS NOT NULL AND (error.code="error" OR error.code="warn")
2016-09-21T11:59:35,126 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Query ID = hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7
2016-09-21T11:59:35,126 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Total jobs = 1
2016-09-21T11:59:35,127 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Launching Job 1 out of 1
2016-09-21T11:59:35,127 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:launchTask(1856)) - Starting task [Stage-1:MAPRED] in serial mode
2016-09-21T11:59:35,127 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(404)) - The current user: hadoop, session user: hadoop
2016-09-21T11:59:35,127 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(421)) - Current queue name is null incoming queue name is null
2016-09-21T11:59:35,173 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,174 INFO [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:updateSession(279)) - Session is already open
2016-09-21T11:59:35,175 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142291 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/commons-vfs2-2.0.jar
2016-09-21T11:59:35,176 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142320 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-ddb-hive.jar
2016-09-21T11:59:35,177 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142353 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-hive-goodies.jar
2016-09-21T11:59:35,178 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142389 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-kinesis-hive.jar
2016-09-21T11:59:35,178 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142423 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-contrib-2.1.0-amzn-0.jar
2016-09-21T11:59:35,179 INFO [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142496 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-plugins-0.0.1-emr-upgrade-20160919.070538-1.jar
2016-09-21T11:59:35,179 INFO [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:build(321)) - Dag name: INSERT INTO TABLE valid...error.code="warn")(Stage-1)
2016-09-21T11:59:35,180 INFO [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,223 INFO [HiveServer2-Background-Pool: Thread-3881([])]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(273)) - Submitted application application_1474442135017_0147
2016-09-21T11:59:35,224 INFO [HiveServer2-Background-Pool: Thread-3881([])]: client.TezClient (TezClient.java:start(477)) - The url to track the Tez Session: http://ip-172-30-2-148.us-west-2.compute.internal:20888/proxy/application_1474442135017_0147/
2016-09-21T11:59:35,391 INFO [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printInfo(1054)) - Map 1: 0(+0,-4)/1
2016-09-21T11:59:35,446 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Status: Failed
2016-09-21T11:59:35,447 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Vertex failed, vertexName=Map 1, vertexId=vertex_1474442135017_0134_2_00, diagnostics=[Task failed, taskId=task_1474442135017_0134_2_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
... 20 more
], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_1:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
... 20 more
], TaskAttempt 2 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_2:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
... 20 more
], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_3:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
... 14 more
Hive logs with DEBUG mode enabled -
The exceptions are highlighted in green.
As per my understanding, just before the exception Hive renamed the file to some other name, and all of this happens in S3. Since S3 is eventually consistent, that's why it sometimes shows this exception and sometimes works fine.
https://docs.google.com/document/d/1cwXVqQ3p-xPFcBqU9AuD7C8z8rHjhUIHwPjY-nVpFK0/edit?usp=sharing
I also set the following Hive configuration properties before executing the query -
set hive.mapjoin.smalltable.filesize = 2000000000
set mapreduce.map.speculative = false
set mapreduce.output.fileoutputformat.compress = true
set hive.exec.compress.output = true
set mapreduce.task.timeout = 6000000
set hive.optimize.bucketmapjoin.sortedmerge = true
set io.compression.codecs = org.apache.hadoop.io.compress.GzipCodec
set hive.auto.convert.sortmerge.join.noconditionaltask = false
set hive.optimize.bucketmapjoin = true
set hive.exec.compress.intermediate = true
set hive.enforce.bucketmapjoin = true
set mapred.output.compress = true
set mapreduce.map.output.compress = true
set hive.auto.convert.sortmerge.join = false
set hive.auto.convert.join = false
set mapreduce.reduce.speculative = false
set mapred.output.compression.codec = org.apache.hadoop.io.compress.GzipCodec
set hive.cache.expr.evaluation=false
set mapred.output.compress=true
set hive.exec.compress.output=true
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
set hive.exec.compress.intermediate=true
set mapreduce.map.output.compress=true
set hive.auto.convert.join=false
set mapreduce.map.speculative=false
set mapreduce.reduce.speculative=false
Cluster details -
One data node with 32 GB of disk space.
Hive 2.1.0, execution engine Tez 0.8.3
Hadoop 2.7.2
Questions -
Why is it throwing a LeaseExpiredException?
Is the Hive query failure related to the LeaseExpiredException?
Is it because of wrong Hive configuration properties?
Update 1
As per this answer - LeaseExpiredException: No lease error on HDFS (Failed to close file) - I added
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
But it still shows the same exception.
I resolved the issue. Let me explain in detail.
The exceptions that were coming up -
LeaseExpiredException - from the HDFS side.
FileNotFoundException - from the Hive side (when the Tez execution engine executes the DAG).
Problem scenario -
We had just upgraded Hive from version 0.13.0 to 2.1.0, and everything had worked fine with the previous version: zero runtime exceptions.
Different thoughts on resolving the issue -
The first thought was that two task attempts were working on the same piece of data because of speculative execution. But as per the settings below
set mapreduce.map.speculative=false
set mapreduce.reduce.speculative=false
that was not possible.
Then I increased the limit from 1000 to 100000 for the settings below -
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
That also didn't work.
The third thought was that, within the same process, what mapper-1 created was deleted by another mapper/reducer. But we didn't find any such logs in the HiveServer2 or Tez logs.
Finally, the root cause lay in the application-layer code itself. hive-exec 2.1.0 introduced a new configuration property:
hive.exec.stagingdir = .hive-staging
Description of the above property -
Directory name that will be created inside table locations in order to
support HDFS encryption. This replaces ${hive.exec.scratchdir} for
query results with the exception of read-only tables. In all cases
${hive.exec.scratchdir} is still used for other temporary files, such
as job plans.
So if there are concurrent jobs in the application-layer code (ETL) doing operations (rename/delete/move) on the same table, this problem can arise.
In our case, two concurrent jobs were doing an INSERT OVERWRITE on the same table; one deleted a staging file belonging to the other's mapper, which caused this issue.
Resolution -
Move the staging file location outside the table location (our table lies in S3); a sketch of this option follows below.
Disable HDFS encryption (as mentioned in the description of the stagingdir property).
Change your application-layer code to avoid the concurrency issue.
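For the first option, the staging directory can be overridden per session before running the query. A minimal sketch over HiveServer2 JDBC, assuming the Hive JDBC driver is on the classpath, a server at localhost:10000, a user named hadoop, and an HDFS path /tmp/hive-staging that the user can write to (all placeholders, not values from this cluster), and assuming the server allows this property to be set at the session level:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StagingDirOverride {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; adjust host, port, and database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hadoop", "");
             Statement stmt = conn.createStatement()) {
            // Keep staging files out of the (S3-backed) table location,
            // so a concurrent INSERT OVERWRITE cannot delete them.
            stmt.execute("SET hive.exec.stagingdir=/tmp/hive-staging/.hive-staging");
            // Then run the INSERT OVERWRITE query from above in this same session.
        }
    }
}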
Related question - Why hive_staging file is missing in AWS EMR
I am getting the following error when running a Maven build. It seems unable to instrument classes. Does anyone have an idea what the cause could be?
Thanks.
Here is the build output:
[ERROR] Failed to execute goal org.javalite:activejdbc-instrumentation:1.4.11:instrument (default) on project xtm2rest: Failed to add output directory to classpath: org.javalite.instrumentation.InstrumentationException: javassist.NotFoundException: modelClass(..) is not found in org.javalite.activejdbc.Model -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.javalite:activejdbc-instrumentation:1.4.11:instrument (default) on project xtm2rest: Failed to add output directory to classpath
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:347)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:154)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to add output directory to classpath
at org.javalite.instrumentation.ActiveJdbcInstrumentationPlugin.execute(ActiveJdbcInstrumentationPlugin.java:88)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: java.lang.RuntimeException: org.javalite.instrumentation.InstrumentationException: javassist.NotFoundException: modelClass(..) is not found in org.javalite.activejdbc.Model
at org.javalite.instrumentation.Instrumentation.instrument(Instrumentation.java:70)
at org.javalite.instrumentation.ActiveJdbcInstrumentationPlugin.instrument(ActiveJdbcInstrumentationPlugin.java:124)
at org.javalite.instrumentation.ActiveJdbcInstrumentationPlugin.execute(ActiveJdbcInstrumentationPlugin.java:82)
... 21 more
Caused by: org.javalite.instrumentation.InstrumentationException: javassist.NotFoundException: modelClass(..) is not found in org.javalite.activejdbc.Model
at org.javalite.instrumentation.ModelInstrumentation.instrument(ModelInstrumentation.java:43)
at org.javalite.instrumentation.Instrumentation.instrument(Instrumentation.java:57)
... 23 more
Caused by: javassist.NotFoundException: modelClass(..) is not found in org.javalite.activejdbc.Model
at javassist.CtClassType.getDeclaredMethod(CtClassType.java:1210)
at org.javalite.instrumentation.ModelInstrumentation.doInstrument(ModelInstrumentation.java:51)
at org.javalite.instrumentation.ModelInstrumentation.instrument(ModelInstrumentation.java:40)
... 24 more
Since you are not adding any more information, I can only speculate that the versions of your activejdbc-instrumentation plugin and activejdbc dependency are different. Check that both are on the same version, for example by defining a single Maven property (such as <activejdbc.version>) and referencing it in both places.
I ran into a problem using a string parameter in SQL over a JNDI datasource. I need to build this query:
select name, some_field1, some_field2
from my_table
where name in (${param_name_filter})
But it doesn't work.
I tried changing my query (just for testing) by removing the parameter from it:
select name, some_field1, some_field2
from my_table
where name in ('name 1', 'name 2')
But it doesn't work either; as a result, I get the message "no data found".
Where did I make a mistake?
The Pentaho log file is below, but I get all these messages when Pentaho starts (not while the dashboard is running).
2013-11-07 13:53:39,651 WARN [org.pentaho.hadoop.shim.HadoopConfigurationLocator] Unable to load Hadoop Configuration from "file:///usr/share/pentaho/biserver-ce/pentaho-solutions/system/kettle/plugins/pentaho-big-data-plugin/hadoop-configurations/mapr". For more information enable debug logging.
2013-11-07 13:53:42,827 WARN [org.pentaho.reporting.libraries.base.boot.PackageManager] Unresolved dependency for package: org.pentaho.reporting.engine.classic.extensions.datasources.cda.CdaModule
2013-11-07 13:53:42,848 WARN [org.pentaho.reporting.libraries.base.boot.PackageSorter] A dependent module was not found in the list of known modules.
2013-11-07 13:53:46,071 ERROR [org.pentaho.platform.util.logging.Logger] misc-org.pentaho.platform.engine.services.connection.datasource.dbcp.PooledDatasourceSystemListener: PooledDatasourceSystemListener.ERROR_0003 - Unable to pool datasource object: MySqlConnection caused by java.sql.SQLException: Access denied for user 'root'#'localhost' (using password: NO)
2013-11-07 13:53:46,719 ERROR [org.pentaho.platform.util.logging.Logger] Error: Pentaho
2013-11-07 13:53:46,720 ERROR [org.pentaho.platform.util.logging.Logger] misc-class org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager: PluginManager.ERROR_0011 - Failed to register plugin cdc
org.pentaho.platform.api.engine.PlatformPluginRegistrationException: PluginManager.ERROR_0017 - Could not load lifecycle listener [pt.webdetails.cdc.plugin.CdcLifeCycleListener] for plugin cdc
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.bootStrapPlugin(DefaultPluginManager.java:163)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.registerPlugin(DefaultPluginManager.java:197)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.reload(DefaultPluginManager.java:128)
at org.pentaho.platform.plugin.services.pluginmgr.PluginAdapter.startup(PluginAdapter.java:42)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:342)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:324)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:291)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:208)
at org.pentaho.platform.web.http.context.SolutionContextListener.contextInitialized(SolutionContextListener.java:137)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4135)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4630)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
at org.apache.catalina.core.StandardService.start(StandardService.java:519)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.NoClassDefFoundError: com/hazelcast/core/MembershipListener
at pt.webdetails.cdc.plugin.CdcLifeCycleListener.<clinit>(CdcLifeCycleListener.java:30)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.bootStrapPlugin(DefaultPluginManager.java:160)
... 32 more
Caused by: java.lang.ClassNotFoundException: com.hazelcast.core.MembershipListener
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.pentaho.platform.plugin.services.pluginmgr.PluginClassLoader.loadClass(PluginClassLoader.java:214)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 39 more
2013-11-07 13:53:46,723 ERROR [org.pentaho.platform.util.logging.Logger] Error end:
2013-11-07 13:53:47,043 WARN [pt.webdetails.cpf.persistence.PersistenceEngine] Falling back to built-in config
2013-11-07 13:53:51,917 WARN [pt.webdetails.cpf.persistence.PersistenceEngine] Falling back to built-in config
2013-11-07 13:53:57,529 ERROR [org.pentaho.platform.util.logging.Logger] Error: Pentaho
2013-11-07 13:53:57,529 ERROR [org.pentaho.platform.util.logging.Logger] misc-class org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager: PluginManager.ERROR_0011 - Failed to register plugin cdb
com.orientechnologies.common.io.OIOException: Cannot connect to any configured remote nodes: 127.0.0.1:2424
at com.orientechnologies.orient.client.remote.OStorageRemote.createNetworkConnection(OStorageRemote.java:1686)
at com.orientechnologies.orient.client.remote.OStorageRemote.createConnectionPool(OStorageRemote.java:1953)
at com.orientechnologies.orient.client.remote.OStorageRemote.openRemoteDatabase(OStorageRemote.java:1517)
at com.orientechnologies.orient.client.remote.OStorageRemote.open(OStorageRemote.java:176)
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.open(OStorageRemoteThread.java:69)
at com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:96)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:47)
at com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:114)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:47)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTxPooled.<init>(ODatabaseDocumentTxPooled.java:43)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentPool.createResource(ODatabaseDocumentPool.java:39)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentPool.createResource(ODatabaseDocumentPool.java:20)
at com.orientechnologies.orient.core.db.ODatabasePoolBase$1.createNewResource(ODatabasePoolBase.java:67)
at com.orientechnologies.orient.core.db.ODatabasePoolBase$1.createNewResource(ODatabasePoolBase.java:56)
at com.orientechnologies.common.concur.resource.OResourcePool.getResource(OResourcePool.java:66)
at com.orientechnologies.orient.core.db.ODatabasePoolAbstract.acquire(ODatabasePoolAbstract.java:65)
at com.orientechnologies.orient.core.db.ODatabasePoolAbstract.acquire(ODatabasePoolAbstract.java:53)
at com.orientechnologies.orient.core.db.ODatabasePoolBase.acquire(ODatabasePoolBase.java:114)
at pt.webdetails.cpf.persistence.PersistenceEngine.getConnection(PersistenceEngine.java:137)
at pt.webdetails.cpf.persistence.PersistenceEngine.initializeClass(PersistenceEngine.java:418)
at pt.webdetails.cdb.CdbLifeCycleListener.init(CdbLifeCycleListener.java:45)
at org.pentaho.platform.plugin.services.pluginmgr.PlatformPlugin.init(PlatformPlugin.java:189)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.registerPlugin(DefaultPluginManager.java:199)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.reload(DefaultPluginManager.java:128)
at org.pentaho.platform.plugin.services.pluginmgr.PluginAdapter.startup(PluginAdapter.java:42)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:342)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:324)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:291)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:208)
at org.pentaho.platform.web.http.context.SolutionContextListener.contextInitialized(SolutionContextListener.java:137)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4135)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4630)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
at org.apache.catalina.core.StandardService.start(StandardService.java:519)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
2013-11-07 13:53:57,533 ERROR [org.pentaho.platform.util.logging.Logger] Error end:
2013-11-07 13:53:58,321 WARN [pt.webdetails.cpf.persistence.PersistenceEngine] Falling back to built-in config
2013-11-07 13:53:58,340 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] 856fca78-4792-11e3-a347-f71d7a81a07e:SOLUTION-ENGINE:scheduler.xaction: Action Sequence execution failed, see details below
| Error Time: 7 November 2013 13:53:58 MSK
| Session ID: scheduler.xaction
| Instance Id: 856fca78-4792-11e3-a347-f71d7a81a07e
| Action Sequence: scheduler.xaction
| Execution Stack:
EXECUTING ACTION: Scheduler (org.pentaho.platform.engine.services.solution.PojoComponent)
| Action Class: org.pentaho.platform.engine.services.solution.PojoComponent
| Action Desc: Scheduler
| Loop Index (1-based): 0
Stack Trace:org.pentaho.platform.api.engine.ActionExecutionException: RuntimeContext.ERROR_0017 - Action failed to execute
at org.pentaho.platform.engine.services.runtime.RuntimeContext.executeComponent(RuntimeContext.java:1325)
at org.pentaho.platform.engine.services.runtime.RuntimeContext.executeAction(RuntimeContext.java:1262)
at org.pentaho.platform.engine.services.runtime.RuntimeContext.performActions(RuntimeContext.java:1161)
at org.pentaho.platform.engine.services.runtime.RuntimeContext.executeLoop(RuntimeContext.java:1105)
at org.pentaho.platform.engine.services.runtime.RuntimeContext.executeSequence(RuntimeContext.java:987)
at org.pentaho.platform.engine.services.runtime.RuntimeContext.executeSequence(RuntimeContext.java:897)
at org.pentaho.platform.engine.services.solution.SolutionEngine.executeInternal(SolutionEngine.java:399)
at org.pentaho.platform.engine.services.solution.SolutionEngine.execute(SolutionEngine.java:317)
at org.pentaho.platform.engine.services.solution.SolutionEngine.execute(SolutionEngine.java:193)
at org.pentaho.platform.engine.services.BaseRequestHandler.handleActionRequest(BaseRequestHandler.java:159)
at org.pentaho.platform.scheduler.QuartzExecute.execute(QuartzExecute.java:198)
at org.quartz.core.JobRunShell.run(JobRunShell.java:203)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:520)
2013-11-07 13:54:04,286 ERROR [org.pentaho.platform.util.logging.Logger] Error: Pentaho
2013-11-07 13:54:04,286 ERROR [org.pentaho.platform.util.logging.Logger] misc-class org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager: PluginManager.ERROR_0011 - Failed to register plugin cdv
com.orientechnologies.common.io.OIOException: Cannot connect to any configured remote nodes: 127.0.0.1:2424
at com.orientechnologies.orient.client.remote.OStorageRemote.createNetworkConnection(OStorageRemote.java:1686)
at com.orientechnologies.orient.client.remote.OStorageRemote.createConnectionPool(OStorageRemote.java:1953)
at com.orientechnologies.orient.client.remote.OStorageRemote.openRemoteDatabase(OStorageRemote.java:1517)
at com.orientechnologies.orient.client.remote.OStorageRemote.open(OStorageRemote.java:176)
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.open(OStorageRemoteThread.java:69)
at com.orientechnologies.orient.core.db.raw.ODatabaseRaw.open(ODatabaseRaw.java:96)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:47)
at com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:114)
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:47)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTxPooled.<init>(ODatabaseDocumentTxPooled.java:43)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentPool.createResource(ODatabaseDocumentPool.java:39)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentPool.createResource(ODatabaseDocumentPool.java:20)
at com.orientechnologies.orient.core.db.ODatabasePoolBase$1.createNewResource(ODatabasePoolBase.java:67)
at com.orientechnologies.orient.core.db.ODatabasePoolBase$1.createNewResource(ODatabasePoolBase.java:56)
at com.orientechnologies.common.concur.resource.OResourcePool.getResource(OResourcePool.java:66)
at com.orientechnologies.orient.core.db.ODatabasePoolAbstract.acquire(ODatabasePoolAbstract.java:65)
at com.orientechnologies.orient.core.db.ODatabasePoolAbstract.acquire(ODatabasePoolAbstract.java:53)
at com.orientechnologies.orient.core.db.ODatabasePoolBase.acquire(ODatabasePoolBase.java:114)
at pt.webdetails.cpf.persistence.PersistenceEngine.getConnection(PersistenceEngine.java:137)
at pt.webdetails.cpf.persistence.PersistenceEngine.initializeClass(PersistenceEngine.java:418)
at pt.webdetails.cdv.CdvLifecycleListener.reInit(CdvLifecycleListener.java:60)
at pt.webdetails.cdv.CdvLifecycleListener.init(CdvLifecycleListener.java:52)
at org.pentaho.platform.plugin.services.pluginmgr.PlatformPlugin.init(PlatformPlugin.java:189)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.registerPlugin(DefaultPluginManager.java:199)
at org.pentaho.platform.plugin.services.pluginmgr.DefaultPluginManager.reload(DefaultPluginManager.java:128)
at org.pentaho.platform.plugin.services.pluginmgr.PluginAdapter.startup(PluginAdapter.java:42)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:342)
at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:324)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:291)
at org.pentaho.platform.engine.core.system.PentahoSystem.init(PentahoSystem.java:208)
at org.pentaho.platform.web.http.context.SolutionContextListener.contextInitialized(SolutionContextListener.java:137)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4135)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4630)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
at org.apache.catalina.core.StandardService.start(StandardService.java:519)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
2013-11-07 13:54:04,291 ERROR [org.pentaho.platform.util.logging.Logger] Error end:
2013-11-07 13:54:05,117 WARN [org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator] We don't support method overloading. Ignoring [public java.lang.String serializeModels(org.pentaho.metadata.model.Domain,java.lang.String,boolean) throws java.lang.Exception]
2013-11-07 13:56:07,855 WARN [pt.webdetails.cpf.repository.PentahoRepositoryAccess] hasAccess: .wcdf extension not in acl-files.
2013-11-07 13:56:17,671 ERROR [net.sf.ehcache.store.DiskStore] Corrupt index file. Creating new index.
Use Firebug (or your browser's network inspector) to look at the CDA data calls and see what param_name_filter is being set to. Then fix the issue; hint: make sure the parameter is declared as a string array in the data source definition, as sketched below.
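For reference, CDA parameters are declared inside the DataAccess block of the .cda file. Below is a minimal sketch of what a string-array declaration of param_name_filter might look like; the connection, data access id, query, and default values are all placeholders for illustration, not taken from your setup:

<CDADescriptor>
   <DataSources>
      <Connection id="1" type="sql.jndi">
         <Jndi>SampleData</Jndi>
      </Connection>
   </DataSources>
   <DataAccess id="filteredQuery" connection="1" type="sql" access="public">
      <!-- placeholder query; ${param_name_filter} expands to the array values -->
      <Query>SELECT * FROM some_table WHERE name IN (${param_name_filter})</Query>
      <Parameters>
         <!-- type="StringArray" is what makes this parameter an array of strings;
              default values for array types are semicolon-separated -->
         <Parameter name="param_name_filter" type="StringArray" default="value1;value2"/>
      </Parameters>
   </DataAccess>
</CDADescriptor>

If the dashboard sends a plain string where the data source declares a StringArray (or the other way around), the CDA call will fail, so the quickest check is to compare the value Firebug shows being sent for param_name_filter against the type declared here.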