I am trying to upgrade our job from Flink 1.4.2 to 1.7.1, but I keep running into timeouts after submitting the job. The Flink job runs on our Hadoop cluster (version 2.7) with YARN.
I've seen the following behavior:
Using the same flink-conf.yaml as in 1.4.2: versions 1.5.6, 1.6.3 and 1.7.1 all time out, while 1.4.2 works.
Using 1.5.6 with "mode: legacy" (to switch off FLIP-6; the exact config entry is shown after this list) works.
Using 1.7.1 with "mode: legacy" still times out (I assume this option was removed and the documentation is outdated? https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)
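For reference, by "mode: legacy" I mean this single flink-conf.yaml entry, which in 1.5/1.6 selects the pre-FLIP-6 code paths:

mode: legacy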
When the timeout happens, I get the following stack trace:
INFO class java.time.Instant does not contain a getter for field seconds
INFO class com.bol.fin_hdp.cm1.domain.Cm1Transportable does not contain a getter for field globalId
INFO Submitting job 5af931bcef395a78b5af2b97e92dcffe (detached: false).
INFO ------------------------------------------------------------
INFO The program finished with the following exception:
INFO org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
INFO at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
INFO at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:798)
INFO at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:289)
INFO at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
INFO at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1035)
INFO at org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1111)
INFO at java.security.AccessController.doPrivileged(Native Method)
INFO at javax.security.auth.Subject.doAs(Subject.java:422)
INFO at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
INFO at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
INFO at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1111)
INFO Caused by: java.lang.RuntimeException: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:43)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJobWithConfig(IntervalJobStarter.java:32)
INFO at com.bol.fin_hdp.Main.main(Main.java:8)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO at java.lang.reflect.Method.invoke(Method.java:498)
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
INFO ... 12 more
INFO Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
INFO at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
INFO at com.bol.fin_hdp.cm1.job.Job.execute(Job.java:54)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:41)
INFO ... 19 more
INFO Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
INFO at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:371)
INFO at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
INFO at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:216)
INFO at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
INFO at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:301)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
INFO at java.lang.Thread.run(Thread.java:748)
INFO Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
INFO ... 17 more
INFO Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-01...
INFO at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
INFO at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
INFO at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
INFO at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
INFO ... 15 more
INFO Caused by: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-017.example.com/some.ip.address:46500
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
INFO ... 7 more
What changed in FLIP-6 that might cause this behavior, and how can I fix it?
For our jobs on YARN with Flink 1.6, we had to bump up the web.timeout setting via -yD web.timeout=100000.
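A minimal sketch of such a submission (the jar path is hypothetical; web.timeout is in milliseconds):

./bin/flink run -m yarn-cluster -yD web.timeout=100000 ./my-job.jar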
In our case, there was a firewall between the machine submitting the job and our Hadoop cluster.
In newer Flink versions (1.7 and up), jobs are submitted via REST. On YARN setups, the port for this REST service was chosen at random and could not be configured.
Flink 1.8.0 introduced a config option to set this to a fixed port or port range:
rest.bind-port: 55520-55530
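If you prefer not to touch flink-conf.yaml, the same property can be passed per submission on YARN, for example (a sketch; the jar path is hypothetical):

./bin/flink run -m yarn-cluster -yD rest.bind-port=55520-55530 ./my-job.jar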
Related
I am using Flink 1.6.1 and Hadoop 2.7.5. First I start a Flink session:
bin/yarn-session.sh -n 2 -jm 1024 -tm 1024 -d
Then I submit a job:
./bin/flink run ./examples/batch/WordCount.jar -input hdfs://CS-201:9000/LICENSE -output hdfs://CS-201:9000/wordcount-result.txt
I got an error:
[root@CS-201 flink-1.6.1]# ./bin/flink run ./examples/batch/WordCount.jar -input hdfs://CS-201:9000/LICENSE -output hdfs://CS-201:9000/wordcount-result.txt
2019-05-19 15:31:11,357 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - Found Yarn properties file under /tmp/.yarn-properties-root.
2019-05-19 15:31:11,737 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - YARN properties set default parallelism to 2
2019-05-19 15:31:11,777 INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at CS-201/192.168.1.201:8032
2019-05-19 15:31:11,887 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2019-05-19 15:31:11,891 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
2019-05-19 15:31:11,979 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Found application JobManager host name 'cs-202' and port '52389' from supplied application id 'application_1558248666499_0003'
Starting execution of program
------------------------------------------------------------
The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: 471f0c2d047aba74ea621c5bfe782cbf)
    at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:260)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:474)
    at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
    at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
    at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
    at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:379)
    at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
    at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
    at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
    at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
    at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
    at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
    at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
    at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:899)
    ... 12 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
    ... 10 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
    at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
    at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
    at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
    at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
    at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
    ... 4 more
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
    at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:310)
    at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:294)
    at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
    ... 5 more
Why does this happen, and how can I fix it?
HiveServer2 does not start after installing HDP 2.6.4.0-91 using Cloudbreak on AWS.
I start HiveServer2 in the Ambari UI and check the contents of /var/log/hive/hiveserver2.log.
Below is the error log.
Any help would be appreciated.
Contents of hiveserver2.log
2018-03-08 04:41:53,345 WARN [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(343)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2018-03-08 04:41:53,347 INFO [main]: zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x16203aad5af0040 closed
2018-03-08 04:41:53,347 INFO [main]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:stop(405)) - Shutting down HiveServer2
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2018-03-08 04:41:53,348 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1520480101488_0046 failed 2 times due to AM Container for appattempt_1520480101488_0046_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://ip-10-0-91-7.ap-northeast-2.compute.internal:8088/cluster/app/application_1520480101488_0046 Then click on links to logs of each attempt.
Diagnostics: ExitCodeException exitCode=2: tar: Removing leading `/' from member names
tar: Skipping to next header
gzip: /hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz: invalid compressed data--format violated
tar: Exiting with failure status due to previous errors
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
I had exactly the same issue with HDP on AWS. FYI, in my case the issue was with HDP version 2.6.4.5-2. I'll show how I fixed it on that version, since it is the latest at the time of writing.
As the error log shows, the problem is that tez.tar.gz is corrupted, so YARN is unable to decompress it in the YARN container.
This tez.tar.gz file is copied from hdfs:///hdp/apps/<hdp_version>/tez/tez.tar.gz.
To reproduce the error and confirm that the file is corrupted, you can run the following commands:
sudo su
su hdfs
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
tar -xvzf tez.tar.gz
You will get the following error:
gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
The fix is pretty simple: just replace the HDFS file with the one on your local file system by running the following commands:
hdfs dfs -rm /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
hdfs dfs -put /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
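To double-check the replacement before restarting anything, you can optionally pull the file back out of HDFS and list the archive (just a sanity check; the /tmp path is arbitrary):

hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz /tmp/tez-check.tar.gz
tar -tzf /tmp/tez-check.tar.gz > /dev/null && echo "archive OK"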
Now restart the HiveServer2 service and you're done!
NOTE: If something similar happens with other services you can do the same thing. Please check the following link that has more details: https://community.hortonworks.com/articles/30096/foxing-broken-targz-and-jar-files-in-hdp-24.html
Hope this helps!
I'm tearing my hair out with IBM MobileFirst Platform 8.0 while deploying an adapter.
I'm quite new to this platform and am trying to follow the instructions here:
http://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/adapters/creating-adapters/#creating-adapters-using-mobilefirst-cli
I installed the MFP server and the mfpdev-cli, installed Maven, and created an adapter (just the default adapter). Then I ran
mfpdev adapter build
The build succeeded.
However, when I tried to deploy to the MFP server using mfpdev (the exact command is shown below), there was an error in the console and the adapter was not deployed.
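The deploy step was the standard CLI command from the tutorial, run from the adapter's directory:

mfpdev adapter deploy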
Here's the error :
[INFO] -----------------------------------------------------------------------
[INFO] Building SampleAdapter 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- adapter-maven-plugin:8.0.2018021101:deploy (default-cli) @ SampleAdapter ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.529 s
[INFO] Finished at: 2561-03-08T19:20:29+07:00
[INFO] Final Memory: 8M/112M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.ibm.mfp:adapter-maven-plugin:8.0.2018021101:deploy (default-cli) on project SampleAdapter: Unexpected response from http://localhost:9080/mfpadmin/management-apis/2.0/runtimes/mfp/adapters -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Error deploying adapter
undefined
Error: An error occurred during an attempt to deploy the adapter. See the
preceding messages for details.
And when I looked at the MFP server log, it showed this error.
[ERROR ] RuntimeMBeanCallable.call() exception
java.lang.reflect.UndeclaredThrowableException
[err] java.lang.reflect.UndeclaredThrowableException
[err] at com.sun.proxy.$Proxy342.changeDeploymentState(Unknown Source)
[err] at com.ibm.mfp.admin.actions.ArtifactDeploymentTransaction.prepareMBean(ArtifactDeploymentTransaction.java:822)
[err] at com.ibm.mfp.admin.actions.util.RuntimeMBeanWorkerThreadCaller$RuntimeMBeanCallable.call(RuntimeMBeanWorkerThreadCaller.java:76)
[err] at com.ibm.mfp.admin.actions.util.RuntimeMBeanWorkerThreadCaller.callSynchronously(RuntimeMBeanWorkerThreadCaller.java:206)
[err] at com.ibm.mfp.admin.actions.util.RuntimeMBeanPoolCaller.callRuntimeMBeans(RuntimeMBeanPoolCaller.java:91)
...
[err] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
[err] at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:501)
[err] at java.lang.Throwable.readObject(Throwable.java:914)
[err] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[err] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[err] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[err] at java.lang.reflect.Method.invoke(Method.java:497)
[err] at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
[err] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900)
[err] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
[err] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
[err] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
[err] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
[err] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
[err] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
[err] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
[err] at com.ibm.ws.jmx.connector.server.rest.JSONSerializationHelper.readObject(JSONSerializationHelper.java:69)
[err] at [internal classes]
[err] ... 102 more
[ERROR ] Unable to delete configuration with id ADAPTER_CONTENT due to exception FWLSE3208E: An invalid status code "404" was returned.
The response content is "{"reason":"configuration_not_found","details":"CNFSRVE115: configuration with id 'ADAPTER_CONTENT' for schema 'mfp_default_schema' with version '1.0' not found"}".
However, skimming back through the server startup log, I noticed some [err] lines, though I'm not sure whether they indicate an actual error.
[WARNING ] CWNEN0070W: The javax.ws.rs.CookieParam annotation class will not be recognized because it was loaded from the file:/D:/MobileFirst/mfp server/usr/servers/mfp/apps/mfp-push-service.war location rather than from a product class loader.
[err] 670 PushPU-derby INFO [Default Executor-thread-2] openjpa.Runtime - Starting OpenJPA 2.4.1
[err] 1363 PushPU-derby INFO [Default Executor-thread-2] openjpa.jdbc.JDBC - Using dictionary class "org.apache.openjpa.jdbc.sql.DerbyDictionary".
[err] 2688 PushPU-derby INFO [Default Executor-thread-2] openjpa.jdbc.JDBC - Connected to Apache Derby version 10.10 using JDBC driver Apache Derby Embedded JDBC Driver version 10.11.1.1 - (1616546).
[AUDIT ] CWWKZ0001I: Application mfp started in 47.549 seconds.
[AUDIT ] CWWKZ0001I: Application imfpush started in 49.088 seconds.
[19:12:51,128][WARN ][org.elasticsearch.bootstrap] JNA not found. native methods will be disabled.
[err] 24634 WorklightManagementPU-derby INFO [Default Executor-thread-5] openjpa.Runtime - Starting OpenJPA 2.4.1
[err] 24634 WorklightManagementPU-derby INFO [Default Executor-thread-5] openjpa.jdbc.JDBC - Using dictionary class "org.apache.openjpa.jdbc.sql.DerbyDictionary" (Apache Derby 10.11.1.1 - (1616546) ,Apache Derby Embedded JDBC Driver 10.11.1.1 - (1616546)).
[err] 24650 WorklightManagementPU-derby INFO [Default Executor-thread-5] openjpa.jdbc.JDBC - Connected to Apache Derby version 10.10 using JDBC driver Apache Derby Embedded JDBC Driver version 10.11.1.1 - (1616546).
[AUDIT ] CWWKZ0001I: Application analytics-service started in 58.394 seconds.
I couldn't find any solution or resource related to this problem. There are just a few threads talking about it, but they didn't manage to solve it either:
Configuration with id 'ADAPTER_CONTENT' for schema 'mfp_default_schema' with version '1.0' not found
and
https://www.cpume.com/question/fezseeoz-unable-to-delete-configuration-with-id-adapter-content-due-to-exception-fwlse320.html
I also tried to deploy from the MobileFirst web console. It also failed.
My machine environment is as follows:
OS : Win7 SP1
JDK : 1.8.0_60-b27
Maven : 3.5.2
MFP Server : 8.0.0.00-20180220-083852
mfpdev-cli : 8.0.0-2017102406
Has anyone experienced this before?
Any help or suggestions would be greatly appreciated.
I followed the doc: https://www.jetbrains.com/help/idea/preparing-to-use-struts-2.html
Step 1. New Project
Step 2. Finish and fix the problem shown in the Project Structure panel
Step 3. Run 'Tomcat'
Step 4. Get error
Some details:
Tomcat 8.5.14 / Struts 2.5.13 (auto-downloaded)
Server Log Output:
"C:\Program Files\Java\apache-tomcat-8.5.14\bin\catalina.bat" run
[2017-11-22 02:52:46,624] Artifact HelloStruts:war exploded: Waiting for server connection to start artifact deployment...
Using CATALINA_HOME: "C:\Program Files\Java\apache-tomcat-8.5.14"
Using CATALINA_TMPDIR: "C:\Program Files\Java\apache-tomcat-8.5.14\temp"
Using JRE_HOME: "C:\Program Files\Java\jdk1.8.0_111"
Using CLASSPATH: "C:\Program Files\Java\apache-tomcat-8.5.14\bin\bootstrap.jar;C:\Program Files\Java\apache-tomcat-8.5.14\bin\tomcat-juli.jar"
22-Nov-2017 14:52:47.451 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.5.14
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Apr 13 2017 12:55:45 UTC
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.5.14.0
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Windows 10
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 10.0
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: C:\Program Files\Java\jdk1.8.0_111\jre
22-Nov-2017 14:52:47.454 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_111-b14
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: C:\Users\dram2\.IntelliJIdea2017.2\system\tomcat\Tomcat_8_5_14_HelloStruts
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: C:\Program Files\Java\apache-tomcat-8.5.14
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=C:\Users\dram2\.IntelliJIdea2017.2\system\tomcat\Tomcat_8_5_14_HelloStruts\conf\logging.properties
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote=
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.port=1099
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.ssl=false
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.authenticate=false
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.rmi.server.hostname=127.0.0.1
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
22-Nov-2017 14:52:47.455 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=C:\Users\dram2\.IntelliJIdea2017.2\system\tomcat\Tomcat_8_5_14_HelloStruts
22-Nov-2017 14:52:47.456 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=C:\Program Files\Java\apache-tomcat-8.5.14
22-Nov-2017 14:52:47.456 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=C:\Program Files\Java\apache-tomcat-8.5.14\temp
22-Nov-2017 14:52:47.456 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library 1.2.12 using APR version 1.5.2.
22-Nov-2017 14:52:47.456 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
22-Nov-2017 14:52:47.456 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
22-Nov-2017 14:52:48.183 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized (OpenSSL 1.0.2k 26 Jan 2017)
22-Nov-2017 14:52:48.278 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
22-Nov-2017 14:52:48.302 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
22-Nov-2017 14:52:48.305 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
22-Nov-2017 14:52:48.307 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
22-Nov-2017 14:52:48.308 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1186 ms
22-Nov-2017 14:52:48.344 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
22-Nov-2017 14:52:48.344 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.14
22-Nov-2017 14:52:48.355 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
22-Nov-2017 14:52:48.367 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
22-Nov-2017 14:52:48.371 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 63 ms
Connected to server
[2017-11-22 02:52:48,749] Artifact HelloStruts:war exploded: Artifact is being deployed, please wait...
22-Nov-2017 14:52:49.400 INFO [RMI TCP Connection(3)-127.0.0.1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
22-Nov-2017 14:52:49.424 SEVERE [RMI TCP Connection(3)-127.0.0.1] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file
22-Nov-2017 14:52:49.424 SEVERE [RMI TCP Connection(3)-127.0.0.1] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
[2017-11-22 02:52:49,437] Artifact HelloStruts:war exploded: Error during artifact deployment. See server log for details.
22-Nov-2017 14:52:58.363 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory C:\Program Files\Java\apache-tomcat-8.5.14\webapps\manager
22-Nov-2017 14:52:58.414 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory C:\Program Files\Java\apache-tomcat-8.5.14\webapps\manager has finished in 50 ms
These lines might be important:
22-Nov-2017 14:52:49.400 INFO [RMI TCP Connection(3)-127.0.0.1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
22-Nov-2017 14:52:49.424 SEVERE [RMI TCP Connection(3)-127.0.0.1] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file
22-Nov-2017 14:52:49.424 SEVERE [RMI TCP Connection(3)-127.0.0.1] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
[2017-11-22 02:52:49,437] Artifact HelloStruts:war exploded: Error during artifact deployment. See server log for details.
Tomcat Localhost Log
24-Nov-2017 10:40:21.993 SEVERE [RMI TCP Connection(4)-127.0.0.1] org.apache.catalina.core.StandardContext.filterStart Exception starting filter struts2
java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1285)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1119)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:511)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:492)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:118)
at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:105)
at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4590)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5233)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.manageApp(HostConfig.java:1702)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:300)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:482)
at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:431)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:300)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(Screenshots: Project Structure - Modules, Facets, and Artifacts)
What might be important: I fixed a problem shown in the 'Project Structure - Problems' panel, but the project still cannot run.
Tomcat is not working and I cannot access 'http://localhost:8080/index.jsp'.
But if I create a new project without Struts 2, Tomcat works.
If there's any other info I should provide, please let me know in the comments.
How can I fix this? Any advice would be helpful, thank you!
My solution is not direct, but it is simple enough.
Just download Struts 2 from its website.
Choose 'Use library' when creating a new project.
Everything then works correctly.
I still don't know why the Struts 2 that IntelliJ downloads doesn't work, but at least I can get on with my work.
Here is a step-by-step write-up on SegmentFault.
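For anyone comparing notes: the ClassNotFoundException in the question names the old Struts 2.3 filter class (org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter), while Struts 2.5 moved that filter out of the .ng. package. So with the auto-downloaded 2.5.13 it is worth checking that web.xml looks roughly like this (a sketch for Struts 2.5.x; the filter name is arbitrary):

<filter>
    <filter-name>struts2</filter-name>
    <!-- Struts 2.5.x package; 2.3.x used org.apache.struts2.dispatcher.ng.filter.* -->
    <filter-class>org.apache.struts2.dispatcher.filter.StrutsPrepareAndExecuteFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>struts2</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>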
I downloaded Pig from Apache, installed it, and tried to run it using pig -x local.
This is what I get:
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/12/10 15:06:26 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-12-10 15:06:26,063 [main] INFO org.apache.pig.Main - Logging error messages to: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,097 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/ubuntu/.pigbootup not found
2015-12-10 15:06:26,132 [main] ERROR org.apache.pig.Main - ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
Details at logfile: /usr/local/pig/pig_1449756386061.log
2015-12-10 15:06:26,157 [main] INFO org.apache.pig.Main - Pig script completed in 206 milliseconds (206 ms)
My log file contains the following:
Error before Pig is launched
----------------------------
ERROR 2998: Unhandled internal error. Found interface jline.Terminal, but class was expected
java.lang.IncompatibleClassChangeError: Found interface jline.Terminal, but class was expected
at jline.ConsoleReader.<init>(ConsoleReader.java:174)
at jline.ConsoleReader.<init>(ConsoleReader.java:169)
at org.apache.pig.Main.run(Main.java:556)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
================================================================================
After I downloaded and extracted the package, I did the following (pig is in /usr/local/pig):
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79
export PIG_PREFIX=/usr/local/pig
export PATH=$PATH:$PIG_PREFIX/bin
Any ideas what is wrong?
Thanks,
Serban
Add this:
export HADOOP_USER_CLASSPATH_FIRST=true
Refer to:
https://issues.apache.org/jira/browse/PIG-3851
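A minimal sketch of a session with the workaround applied (the export can also go in ~/.bashrc to make it permanent):

export HADOOP_USER_CLASSPATH_FIRST=true
pig -x local

This tells Hadoop's launcher to put the user (Pig) classpath ahead of Hadoop's own jars, so the jline version Pig was built against is loaded instead of the conflicting jline that ships with Hadoop.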
Related: hive startup - [ERROR] Terminal initialization failed; falling back to unsupported