I just installed Tomcat 8 on Ubuntu with OpenJDK 7, but when I access localhost:8080 I get a blank page: no errors, just a blank page.
I attach the error log here:
18-Feb-2018 08:53:45.302 SEVERE [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory /var/lib/tomcat8/webapps/ROOT
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:729)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
18-Feb-2018 08:53:45.304 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 1,269 ms
18-Feb-2018 08:53:45.318 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
18-Feb-2018 08:53:45.323 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1334 ms
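The excerpt above stops before the nested cause of the IllegalStateException, so the first step is to read the full logs. A minimal diagnostic sketch, assuming the stock Ubuntu tomcat8 package layout (paths may differ on other installs):

# Full startup log; the "Caused by:" lines cut off above usually name the real problem
sudo tail -n 300 /var/log/tomcat8/catalina.out

# Per-host log files (localhost.<date>.log) often carry the ROOT deployment error detail
sudo ls -l /var/log/tomcat8/

# Confirm the default ROOT webapp exists and is readable by the tomcat8 user
ls -l /var/lib/tomcat8/webapps/ROOT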
Related
I have started the Pentaho BI server and the response I get is that the Tomcat server has been started.
But when I log in at http://localhost:8080/pentaho, it prompts for a username and password. Based on the documentation and web searches I have tried admin/admin, admin/password, and admin/pentaho.
None of them works. Any guesses?
When I start the BI server, I get this response:
01HW993798:pentaho-server tcssig$ ./start-pentaho.sh
WARNING: Using java from path
DEBUG: _PENTAHO_JAVA_HOME=
DEBUG: _PENTAHO_JAVA=java
Using CATALINA_BASE: /Users/tcssig/Downloads/pentaho-server/tomcat
Using CATALINA_HOME: /Users/tcssig/Downloads/pentaho-server/tomcat
Using CATALINA_TMPDIR: /Users/tcssig/Downloads/pentaho-server/tomcat/temp
Using JRE_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home
Using CLASSPATH: /Users/tcssig/Downloads/pentaho-server/tomcat/bin/bootstrap.jar:/Users/tcssig/Downloads/pentaho-server/tomcat/bin/tomcat-juli.jar
Tomcat started.
The pentaho.log file is shown below.
2016-11-30 16:36:32,907 INFO [org.pentaho.platform.engine.core.system.status.PeriodicStatusLogger] Caution, the system is initializing. Do not shut down or restart the system at this time.
2016-11-30 16:36:33,759 INFO [org.pentaho.platform.osgi.OSGIBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
2016-11-30 16:36:33,993 INFO [org.pentaho.platform.osgi.KarafInstance]
*******************************************************************************
*** Karaf Instance Number: 1 at /Users/tcssig/Downloads/pentaho-server/pent ***
*** aho-solutions/system/karaf/caches/default/data-1 ***
*** Karaf Port:8802 ***
*** OSGI Service Port:9051 ***
*** JMX RMI Registry Port:11099 ***
*** RMI Server Port:44445 ***
*******************************************************************************
2016-11-30 16:37:02,914 INFO [org.pentaho.platform.engine.core.system.status.PeriodicStatusLogger] Caution, the system is initializing. Do not shut down or restart the system at this time.
2016-11-30 16:37:22,379 INFO [org.pentaho.platform.engine.core.system.status.PeriodicStatusLogger] The system has finished initializing.
2016-11-30 16:37:26,177 INFO [org.pentaho.platform.engine.core.system.status.PeriodicStatusLogger] The system has finished initializing.
2016-11-30 16:37:27,350 ERROR [org.pentaho.platform.util.logging.Logger] Error: Pentaho
2016-11-30 16:37:27,351 ERROR [org.pentaho.platform.util.logging.Logger] misc-org.pentaho.platform.engine.core.system.PentahoSystem: PentahoSystem.ERROR_0015 - Error while trying to execute shutdown sequence for org.pentaho.platform.plugin.services.pluginmgr.PluginAdapter
java.lang.IllegalStateException: Service already unregistered.
at org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:124)
at org.pentaho.platform.engine.core.system.objfac.OSGIRuntimeObjectFactory$OSGIPentahoObjectRegistration.remove(OSGIRuntimeObjectFactory.java:178)
at org.pentaho.platform.plugin.services.pluginmgr.PentahoSystemPluginManager.unloadPlugins(PentahoSystemPluginManager.java:225)
at org.pentaho.platform.plugin.services.pluginmgr.PentahoSystemPluginManager.unloadAllPlugins(PentahoSystemPluginManager.java:917)
at org.pentaho.platform.plugin.services.pluginmgr.PluginAdapter.shutdown(PluginAdapter.java:47)
at org.pentaho.platform.engine.core.system.PentahoSystem.shutdown(PentahoSystem.java:1071)
at org.pentaho.platform.web.http.context.SolutionContextListener.contextDestroyed(SolutionContextListener.java:282)
at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:4900)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5537)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:221)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1423)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1412)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-11-30 16:37:27,354 ERROR [org.pentaho.platform.util.logging.Logger] Error end:
2016-11-30 16:37:27,356 INFO [org.pentaho.platform.engine.core.system.status.PeriodicStatusLogger] The system has finished initializing.
2016-11-30 16:37:27,356 WARN [org.pentaho.platform.web.http.context.PentahoSolutionSpringApplicationContext] Exception thrown from ApplicationListener handling ContextClosedEvent
java.lang.IllegalStateException: Service already unregistered.
at org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:124)
at org.pentaho.platform.engine.core.system.objfac.OSGIRuntimeObjectFactory$OSGIPentahoObjectRegistration.remove(OSGIRuntimeObjectFactory.java:178)
at org.pentaho.platform.engine.core.system.objfac.spring.PublishedBeanRegistry$1.onApplicationEvent(PublishedBeanRegistry.java:125)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:166)
at org.springframework.context.event.SimpleApplicationEventMulticaster$1.run(SimpleApplicationEventMulticaster.java:133)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:130)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:382)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:336)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:989)
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:956)
at org.springframework.web.context.ContextLoader.closeWebApplicationContext(ContextLoader.java:581)
at org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoaderListener.java:116)
at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:4900)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5537)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:221)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1423)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1412)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Try changing the AJP and HTTP port numbers in tomcat/conf/server.xml, as sketched below.
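A sketch of the relevant connectors, assuming the stock server.xml that ships with the Pentaho Tomcat; the new values 8081 and 8010 are only examples, any free ports will do:

<!-- HTTP connector: change 8080 if something else already owns that port -->
<Connector port="8081" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

<!-- AJP connector: change 8009 likewise -->
<Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />

After changing the HTTP port, restart the server and open http://localhost:8081/pentaho (using the example value above).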
I tried to run a Spark 1.6.0 (spark-1.6.0-bin-hadoop2.6) program in local mode using IntelliJ IDEA, and it fails with the error below. (The Chinese text in the log, 无法指定被请求的地址, means "Cannot assign requested address".)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/17 16:18:25 INFO SparkContext: Running Spark version 1.6.0
16/09/17 16:18:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/17 16:18:25 INFO SecurityManager: Changing view acls to: ron
16/09/17 16:18:25 INFO SecurityManager: Changing modify acls to: ron
16/09/17 16:18:25 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ron); users with modify permissions: Set(ron)
16/09/17 16:18:26 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/17 16:18:26 ERROR SparkContext: Error initializing SparkContext.
java.net.BindException: 无法指定被请求的地址: Service 'sparkDriver' failed after 16 retries!
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/09/17 16:18:26 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.net.BindException: 无法指定被请求的地址: Service 'sparkDriver' failed after 16 retries!
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
1. Get your hostname with the hostname command.
2. If it is not already present, add an entry for that hostname to /etc/hosts (see the sketch after this answer):
127.0.0.1 your_hostname
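A minimal sketch of those two steps, assuming a Linux or macOS machine; my-laptop stands in for whatever value hostname prints:

# 1. Find the machine's hostname
hostname

# 2. Map it to the loopback address if /etc/hosts does not have an entry yet
#    (replace my-laptop with the value printed above)
echo "127.0.0.1   my-laptop" | sudo tee -a /etc/hosts

Another workaround often used for the same bind error (not part of the answer above) is to export SPARK_LOCAL_IP=127.0.0.1 before launching.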
When I run the Apache Twill HelloWorld example as described at http://twill.incubator.apache.org/GettingStarted.html, I get this log:
10:44:47.888 [ STARTING] DEBUG o.a.twill.yarn.YarnTwillController -
Yarn application status for HelloWorldRunnable
application_1443786884805_0185: ACCEPTED
10:44:48.383 [ STARTING-SendThread(hadice.dev:2181)] DEBUG org.apache.zookeeper.ClientCnxn -
Got ping response for sessionid: 0x15028da0ff0009d after 0ms
10:44:48.889 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root sending #45
10:44:48.894 [IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root]
DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root got value #45
10:44:48.894 [ STARTING] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getApplicationReport took 6ms
10:44:48.895 [ STARTING] DEBUG o.a.twill.yarn.YarnTwillController -
Yarn application status for HelloWorldRunnable application_1443786884805_0185:
ACCEPTED
10:44:49.711 [Kafka-Consumer-log-0] DEBUG o.a.t.i.k.client.SimpleKafkaConsumer -
No leader for topic partition TopicPartition{topic=log, partition=0}.
10:44:49.895 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root sending #46
10:44:49.902 [IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root] DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root got value #46
10:44:49.902 [ STARTING] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getApplicationReport took 7ms
10:44:49.902 [ STARTING] DEBUG o.a.twill.yarn.YarnTwillController -
Yarn application status for HelloWorldRunnable application_1443786884805_0185:
FAILED
10:44:50.902 [ STARTING] INFO o.a.twill.yarn.YarnTwillController -
Yarn application HelloWorldRunnable application_1443786884805_0185 is in state
FAILED
10:44:50.903 [ STARTING] INFO o.a.twill.yarn.YarnTwillController -
Yarn application HelloWorldRunnable application_1443786884805_0185 is not in running state. Shutting down controller.
10:44:50.907 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root sending #47
10:44:50.908 [ STARTING-SendThread(hadice.dev:2181)] DEBUG org.apache.zookeeper.ClientCnxn -
Reading reply sessionid:0x15028da0ff0009d, packet::
clientPath:/HelloWorldRunnable/instances/5e72cb8c-cf94-4718-a44b-ec983304efa0
serverPath:/HelloWorldRunnable/instances/5e72cb8c-cf94-4718-a44b-ec983304efa0
finished:false header:: 10,3 replyHeader:: 10,1797,-101 request:: '/HelloWorldRunnable/instances/5e72cb8c-cf94-4718-a44b-ec983304efa0,T response::
10:44:50.913 [IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root] DEBUG org.apache.hadoop.ipc.Client -
IPC Client (431687661) connection to gin1.dev/10.0.22.129:8032 from root got value #47
10:44:50.913 [ STOPPING] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getApplicationReport took 6ms
10:44:50.916 [ STOPPING] DEBUG o.a.twill.yarn.YarnTwillController -
Yarn application HelloWorldRunnable application_1443786884805_0185 completed with status
FAILED
The application gets ACCEPTED but then transitions to the FAILED state.
The YARN web UI shows this as the error (which is not very specific):
Application application_1443786884805_0185 failed 2 times due to AM Container for appattempt_1443786884805_0185_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://gin1.dev:8088/proxy/application_1443786884805_0185/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1443786884805_0185_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
And the node log shows:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2693)
at java.lang.Class.privateGetMethodRecursive(Class.java:3040)
at java.lang.Class.getMethod0(Class.java:3010)
at java.lang.Class.getMethod(Class.java:1776)
at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:85)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
What can be wrong? I assumed the Hadoop classes would be on the classpath in a YARN application. How do I fix this?
I assume you are launching the application with the result of hadoop classpath on the classpath (as shown in the example). You need to make sure that the result of running hadoop classpath on the launcher box points to local paths that contain the Hadoop jars. The other thing you might want to check is the stdout file in the container log directory: it prints the classpath used to launch the application, so check whether the Hadoop jars appear there.
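A rough sketch of those checks; the jar and main-class names below are placeholders, not Twill's actual example artifacts:

# 1. On the launcher box: every entry printed here should be a local path containing the Hadoop jars
hadoop classpath

# 2. Launch the example with that result on the classpath (placeholder names)
java -cp hello-world.jar:"$(hadoop classpath)" HelloWorldMain

# 3. After a failed attempt, dump the container logs (including stdout, which records
#    the classpath TwillLauncher used)
yarn logs -applicationId application_1443786884805_0185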
What's the reason for the following error, and how can it be solved?
10:54:14,546 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "oneflexi.ear"
10:54:14,998 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) JBAS015876: Starting deployment of "LinkWEB.war"
10:54:14,998 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "LinkEJB.jar"
10:54:15,225 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.deployment.subunit."oneflexi.ear"."LinkWEB.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.subunit."oneflexi.ear"."LinkWEB.war".PARSE: Failed to process phase PARSE of subdeployment "LinkWEB.war" of deployment "oneflexi.ear"
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_15]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_15]
at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_15]
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: JBAS018014: Failed to parse XML descriptor "/content/oneflexi.ear/LinkWEB.war/WEB-INF/jboss-web.xml" at [4,1]
at org.jboss.as.web.deployment.JBossWebParsingDeploymentProcessor.deploy(JBossWebParsingDeploymentProcessor.java:77)
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
... 5 more
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[4,1]
Message: Unexpected element 'error-page' encountered
at org.jboss.metadata.parser.util.MetaDataElementParser.unexpectedElement(MetaDataElementParser.java:108)
at org.jboss.metadata.parser.jbossweb.JBossWebMetaDataParser.parse(JBossWebMetaDataParser.java:211)
at org.jboss.as.web.deployment.JBossWebParsingDeploymentProcessor.deploy(JBossWebParsingDeploymentProcessor.java:69)
... 6 more
10:54:15,266 INFO [org.jboss.as] (MSC service thread 1-5) JBAS015951: Admin console listening on http://127.0.0.1:9990
10:54:15,267 ERROR [org.jboss.as] (MSC service thread 1-5) JBAS015875: JBoss AS 7.1.1.Final "Brontes" started (with errors) in 2477ms - Started 145 of 223 services (3 services failed or missing dependencies, 74 services are passive or on-demand)
10:54:15,469 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015870: Deploy of deployment "oneflexi.ear" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.subunit.\"oneflexi.ear\".\"LinkWEB.war\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.subunit.\"oneflexi.ear\".\"LinkWEB.war\".PARSE: Failed to process phase PARSE of subdeployment \"LinkWEB.war\" of deployment \"oneflexi.ear\""}}
10:54:15,478 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment LinkEJB.jar in 8ms
10:54:15,485 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment LinkWEB.war in 15ms
10:54:15,490 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment oneflexi.ear in 20ms
10:54:15,491 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report
JBAS014777: Services which failed to start: service jboss.deployment.subunit."oneflexi.ear"."LinkWEB.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.subunit."oneflexi.ear"."LinkWEB.war".PARSE: Failed to process phase PARSE of subdeployment "LinkWEB.war" of deployment "oneflexi.ear"
10:54:15,492 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.deployment.subunit.\"oneflexi.ear\".\"LinkWEB.war\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.subunit.\"oneflexi.ear\".\"LinkWEB.war\".PARSE: Failed to process phase PARSE of subdeployment \"LinkWEB.war\" of deployment \"oneflexi.ear\""}}}}
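The parse error itself is fairly specific: error-page is a standard web.xml element, and the JBoss AS 7 descriptor parser does not accept it inside WEB-INF/jboss-web.xml (hence "Unexpected element 'error-page'" at line 4). A sketch of the usual split, with illustrative context-root and error-page values: WEB-INF/jboss-web.xml should only hold container-specific settings, for example:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <context-root>/LinkWEB</context-root>
</jboss-web>

and the error-page mapping moves into WEB-INF/web.xml, where the Servlet spec defines it:

<error-page>
    <error-code>404</error-code>
    <location>/error.jsp</location>
</error-page>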
I have this trace in JBoss on CloudBees:
21:29:57,462 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC00001: Failed to start service jboss.deployment.subunit."app.ear"."webapp.war".INSTALL: org.jboss.msc.service.StartException in service jboss.deployment.subunit."app.ear"."webapp.war".INSTALL: Failed to process phase INSTALL of subdeployment "webapp.war" of deployment "app.ear"
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:121) [jboss-as-server-7.0.2.Final.jar:7.0.2.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1824) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1759) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_35]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_35]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_35]
Caused by: java.lang.RuntimeException: Could not get class configuration for
br.com.mystudies.service.persistence.BackLogDAOBean due to the following errors:
Can't find a deployment unit named mystudies-persistence at subdeployment
"webapp.war" of deployment "app.ear"
But when I download the WAR file from Jenkins and deploy it in my local environment:
18:19:51,614 INFO [org.hibernate.service.jdbc.connections.internal.ConnectionProviderInitiator] (MSC service thread 1-6) HHH00130:Instantiating explicit connection provider: org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider
18:19:52,104 INFO [org.hibernate.dialect.Dialect] (MSC service thread 1-6) HHH00400:Using dialect: org.hibernate.dialect.MySQLDialect
more log...
18:19:54,399 INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate] (MSC service thread 1-6) HHH00232:Schema update complete
18:19:56,371 INFO [org.jboss.web] (MSC service thread 1-3) registering web context: /mystudies-web-1.0.0
18:19:56,432 INFO [org.jboss.as.server.controller] (DeploymentScanner-threads - 1) Deployed "mystudies-web-1.0.0.war"
the deployment works without any problem.
I searched for this problem on Google, but found no answer.
Can someone help me?
You will want to ensure you are running the same version locally where possible. It is a bit hard to know what the cause could be from that trace alone. Currently only the web profile is supported.
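One quick check, offered only as a sketch (app.ear is the archive name from the trace above): since the error says the deployment unit mystudies-persistence cannot be found, verify that the EAR that actually reaches CloudBees contains a module with that name.

# List the EAR contents and look for the persistence module referenced in the error
unzip -l app.ear
unzip -l app.ear | grep -i persistence

If the module is present in your local build but missing from the EAR Jenkins produces, the problem is in the build/assembly step rather than in the server environment.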