Spoon takes an insanely long time to start - Pentaho

I am running Spoon - Pentaho EE v6.1 on my laptop (8 GB RAM) and allocated 4 GB to Spoon. Still, it takes 3 minutes and 30 seconds to start. I don't have any plugins and my plugins directory is empty. I have also tried closing all other applications and processes, but with no luck. Am I missing anything obvious?
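For reference, memory is typically given to Spoon via the PENTAHO_DI_JAVA_OPTIONS environment variable that spoon.bat/spoon.sh read; a sketch of a 4 GB allocation (the -Xms value here is illustrative, not necessarily what I used):
set PENTAHO_DI_JAVA_OPTIONS=-Xms1024m -Xmx4096m
Here is the relevant part of the startup log: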
Sep 22, 2017 10:15:13 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-javascript (208) [org.apache.cxf.javascript.JavascriptServerListener]
Sep 22, 2017 10:15:13 AM org.pentaho.caching.impl.PentahoCacheManagerFactory$RegistrationHandler$1 onSuccess
INFO: New Caching Service registered
2017/09/22 10:15:27 - General - Logging plugin type found with ID: CheckpointLogTable
2017/09/22 10:18:00,201 ERROR [KarafLifecycleListener] Error in Blueprint Watcher
org.pentaho.osgi.api.IKarafBlueprintWatcher$BlueprintWatcherException: Unknown error in KarafBlueprintWatcher
at org.pentaho.osgi.impl.KarafBlueprintWatcherImpl.waitForBlueprint(KarafBlueprintWatcherImpl.java:103)
at org.pentaho.di.osgi.KarafLifecycleListener$2.run(KarafLifecycleListener.java:161)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.osgi.api.IKarafBlueprintWatcher$BlueprintWatcherException: Timed out waiting for blueprints to load: pdi-dataservice-server-plugin,pentaho-big-data-impl-shim-initializer,pentaho-big-data-impl-shim-hdfs,pentaho-big-data-impl-shim-hbase,pentaho-big-data-impl-shim-mapreduce,pentaho-big-data-impl-shim-pig,pentaho-big-data-impl-shim-oozie,pentaho-big-data-impl-shim-sqoop,pentaho-big-data-impl-vfs-hdfs,pentaho-big-data-kettle-plugins-common-named-cluster-bridge,pentaho-big-data-kettle-plugins-guiTestActionHandlers,pentaho-big-data-kettle-plugins-hdfs,pentaho-big-data-kettle-plugins-hbase,pentaho-big-data-kettle-plugins-mapreduce,pentaho-big-data-kettle-plugins-pig,pentaho-big-data-kettle-plugins-oozie,pentaho-big-data-kettle-plugins-sqoop,pentaho-hadoop-shims-mapr-osgi-jaas,pentaho-big-data-impl-clusterTests,pentaho-big-data-impl-shim-shimTests,pentaho-yarn-api,pentaho-yarn-impl-shim,pentaho-yarn-kettle-plugin,pentaho-metaverse-core,pentaho-metaverse-web,pentaho-requirejs-osgi-manager,pentaho-angular-bundle,common-ui-6.1.0.1,pentaho-marketplace-di
at org.pentaho.osgi.impl.KarafBlueprintWatcherImpl.waitForBlueprint(KarafBlueprintWatcherImpl.java:88)
... 2 more
2017/09/22 10:18:42 - General - Starting agile-bi
2017/09/22 10:18:43 - class org.pentaho.agilebi.platform.JettyServer - WebServer.Log.CreateListener localhost:10001

Finally found the answer. It's a known issue, and it is fixed in Version 7.
There seems to be a timing issue in the Karaf Blueprint Watcher where sometimes it will ask a bundle for its blueprint file before the bundle is in RESOLVED state. This triggers a (usually) parallel resolve that causes the bundle to be destroyed immediately after creation.
http://jira.pentaho.com/browse/PDI-15488
http://jira.pentaho.com/browse/PDI-14698
http://jira.pentaho.com/browse/PDI-15574
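Until an upgrade to 7.x is possible, a workaround often suggested for these Karaf startup problems (my suggestion, not taken verbatim from the tickets above) is to shut Spoon down and delete the Karaf bundle cache so the blueprints get re-resolved on the next start. Assuming the default PDI layout:
rm -rf data-integration/system/karaf/caches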

Related

Plain vanilla Apache Ignite cluster fails setting state back to ACTIVE

I've got a plain vanilla install of Ignite 2.14, with the binaries downloaded from https://ignite.apache.org/download.cgi (exact link https://dlcdn.apache.org/ignite/2.14.0/apache-ignite-2.14.0-bin.zip). I'm on Windows 10, IGNITE_HOME is not set (it's optional), and Ignite is using this Java runtime:
OpenJDK Runtime Environment 1.8.0_201-2-redhat-b09 Oracle Corporation
OpenJDK 64-Bit Server VM 25.201-b09
I start an Ignite node using the default configuration provided in the downloaded apache-ignite-2.14.0-bin.zip:
ignite.bat ..\config\default-config.xml
This starts fine. Following the instructions at https://ignite.apache.org/docs/latest/tools/control-script I can check the state and see I've got a single node cluster in state ACTIVE (the default-config.xml must not have native persistence enabled, so the cluster goes to ACTIVE state automatically).
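For completeness, the state check is just the standard control-script command:
control.bat --state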
I can then set the state to INACTIVE like so:
control.bat --set-state INACTIVE
This works fine. However if I set the state to active again like so:
control.bat --set-state ACTIVE
I get the error pasted below and the cluster stays in the INACTIVE state. I first came across this error when using Ignite in embedded server mode, but I can still reproduce it with a fresh out-of-the-box ignite install (not using embedded). I'm surprised that a plain vanilla install just calling a couple of basic commands falls over like this. Any idea what's happening?
This is the error:
C:\temp\apache-ignite-2.14.0-bin\bin>control.bat --set-state ACTIVE
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection
INFO: Client TCP connection established: /127.0.0.1:11211
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /127.0.0.1:11211
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.util.GridClientUtils shutdownNow
WARNING: Runnable tasks outlived thread pool executor service [owner=GridClientConnectionManager, tasks=[java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#6d7b4f4c]]
Control utility [ver. 2.14.0#20220929-sha1:951e8deb]
2022 Copyright(C) Apache Software Foundation
User: info
Time: 2022-11-17T16:27:16.344
null suppressed:
Command [SET-STATE] finished with code: 4
Error stack trace:
class org.apache.ignite.internal.client.GridClientException: null
suppressed:
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.handleClientResponse(GridClientNioTcpConnection.java:632)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.handleResponse(GridClientNioTcpConnection.java:563)
at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onMessage(GridClientConnectionManagerAdapter.java:691)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:116)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3734)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1211)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2508)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2273)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at java.lang.Thread.run(Thread.java:748)
Control utility has completed execution at: 2022-11-17T16:27:17.642
Execution time: 1298 ms
Press any key to continue . . .
It's a known issue, which is, unfortunately, not fixed yet.
As a workaround, you can execute the command with the auto-confirmation flag --yes (which skips the interactive confirmation prompt), as shown below:
control.bat --set-state ACTIVE --yes

IntelliJ Kotlin build does nothing

I have seen the following issue repeatedly while building multiple Kotlin projects using IntelliJ IDEA Ultimate Build #IU-222.3739.54 (August 2022) and Kotlin plugin 222-1.7.10-release-334-IJ3739.54:
After making a change to existing sources and attempting to run a functional test I get an error such as:
Kotlin: Cannot find a parameter with this name: plan_type
The change I just made was adding the said enumeration to an existing enum class. Checking the logs I see mention of the alleged compilation error that gets displayed in the IDE:
2022-09-06 18:03:51,521 [1573161] INFO - #c.i.c.ComponentStoreImpl - Saving appEditorSettings took 15 ms, FileTypeManager took 14 ms
2022-09-06 18:04:14,831 [1596471] INFO - #c.i.c.i.CompileDriver - COMPILATION STARTED (BUILD PROCESS)
2022-09-06 18:04:15,975 [1597615] INFO - #c.i.c.s.BuildManager - Using preloaded build process to compile /Users/lexluthor/Workspaces/billing-service
2022-09-06 18:04:15,994 [1597634] INFO - #o.j.k.i.s.r.KotlinCompilerReferenceIndexStorage - KCRI storage is closed
2022-09-06 18:04:16,013 [1597653] INFO - #c.i.c.b.CompilerReferenceServiceBase - backward reference index reader is closed
2022-09-06 18:05:33,532 [1675172] INFO - #o.j.k.i.s.r.KotlinCompilerReferenceIndexStorage - KCRI storage is opened: took 50 ms for 1 storages (filling map: 47 ms, flush to storage: 3 ms)
2022-09-06 18:05:33,551 [1675191] INFO - #c.i.c.b.CompilerReferenceServiceBase - backward reference index reader is opened
2022-09-06 18:05:33,886 [1675526] INFO - #c.i.c.i.CompilerUtil - COMPILATION FINISHED (BUILD PROCESS); Errors: 1; warnings: 0 took 79046 ms: 1 min 19sec
2022-09-06 18:05:34,264 [1675904] INFO - #c.i.c.s.BuildManager - BUILDER_PROCESS [stdout]: Build process started. Classpath: /Users/lexluthor/Library/Application Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/222.3739.54/IntelliJ IDEA.app/Contents/plugins/java/lib/jps-launcher.jar
2022-09-06 18:05:46,991 [1688631] INFO - #c.i.c.ComponentStoreImpl - Saving Project(name=billing-service, containerState=COMPONENT_CREATED, componentStore=/Users/lexluthor/Workspaces/billing-service)RunManager took 18 ms
COMPILATION FINISHED (BUILD PROCESS); Errors: 1
There doesn't appear to be any mention of the said error anywhere else in the log output. Here are some additional observations I have made:
The issue only occurs when building a Kotlin project using Gradle and the language version is 1.7.0 or 1.7.10. The issue does not occur if I set the language version to either 1.6.21 or 1.7.20-Beta.
The issue only occurs when using the IntelliJ IDEA builder, set under Preferences | Build, Execution, Deployment | Build Tools | Gradle | Gradle Projects | Build and run | Build and run using. Switching to the Gradle builder in that setting lets me build the project without any errors, and switching back to the IntelliJ IDEA builder after a successful Gradle build appears to fix the issue in the IDE.
The issue cannot be easily reproduced on a trivial project. I have not been able to reproduce it with a demo project generated using Spring Initializr, but I hit it frequently while making trivial changes to existing, medium-size Kotlin projects where the previous two conditions are met.
I have searched for the issue in JetBrains YouTrack without success.
I received an answer to this question from JetBrains technical support that I thought I'd share here for the benefit of anyone else who comes across this issue:
From my checking, your issue looks the same as the https://youtrack.jetbrains.com/issue/KT-53168/
There is a bug in the incremental compilation which is fixed in the 1.7.20-Beta.
You could use the 1.7.20-Beta for it currently or use other workarounds (like Gradle builder)
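A quick way to apply the Gradle-builder workaround outside the IDE entirely is to run the build from the terminal with the wrapper; if the project compiles there but not with the IntelliJ IDEA builder, you are most likely hitting this bug (a sketch, assuming the project ships the standard Gradle wrapper):
./gradlew clean build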

YARN application ACCEPTED but never RUNNING on Cloudera despite resource allocation

I am using a Cloudera quickstart VM 5.13.0.0 to run Spark applications in yarn-client mode. I have allocated 10 GB and 3 cores to my Cloudera VM. When I submit the application, it is ACCEPTED but never moves on to RUNNING. When I try to look for logs using yarn logs -applicationId, I do not see anything; it's absolutely blank.
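For example, using the application ID that appears in the driver output further down:
yarn logs -applicationId application_1577297544619_0002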
I have looked up this issue in a number of similar questions and answers.
I have practically meddled with every config those questions see a problem with, but I still do not have an answer to my problem, which on the face of it looks just like theirs. Here are the config parameters of my Cloudera cluster:
mapreduce.map.memory.mb 128M
mapreduce.reduce.memory.mb 128M
mapreduce.job.heap.memory-mb.ratio 0.8
yarn.nodemanager.resource.memory-mb 1900M
yarn.nodemanager.resource.percentage-physical-cpu-limit 100
yarn.nodemanager.resource.cpu-vcores 1
yarn.scheduler.minimum-allocation-mb 1M
yarn.scheduler.increment-allocation-mb 100M
yarn.scheduler.maximum-allocation-mb 1600M
yarn.scheduler.minimum-allocation-vcores 1
yarn.scheduler.increment-allocation-vcores 1
yarn.scheduler.maximum-allocation-vcores 2
yarn.scheduler.fair.continuous-scheduling-enabled unchecked
mapreduce.am.max-attempts 1
yarn.resourcemanager.am.max-retries, yarn.resourcemanager.am.max-attempts 1
yarn.app.mapreduce.am.resource.mb 1G
yarn.app.mapreduce.am.resource.cpu-vcores 1
ApplicationMaster Java Maximum Heap Size 512M
yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
yarn.scheduler.fair.user-as-default-queue unchecked
yarn.scheduler.fair.preemption unchecked
yarn.scheduler.fair.preemption.cluster-utilization-threshold 0.8
yarn.scheduler.fair.sizebasedweight unchecked
Fair Scheduler Allocations (deployed) {"defaultFairSharePreemptionThreshold":null,"defaultFairSharePreemptionTimeout":null,"defaultMinSharePreemptionTimeout":null,"defaultQueueSchedulingPolicy":"drf","queueMaxAMShareDefault":-1.0,"queueMaxAppsDefault":null,"queuePlacementRules":[{"create":true,"name":"specified","queue":null,"rules":null},{"create":null,"name":"nestedUserQueue","queue":null,"rules":[{"create":true,"name":"default","queue":"users","rules":null}]},{"create":null,"name":"default","queue":null,"rules":null}],"queues":[{"aclAdministerApps":null,"aclSubmitApps":null,"allowPreemptionFrom":null,"fairSharePreemptionThreshold":null,"fairSharePreemptionTimeout":null,"minSharePreemptionTimeout":null,"name":"root","queues":[{"aclAdministerApps":null,"aclSubmitApps":null,"allowPreemptionFrom":null,"fairSharePreemptionThreshold":null,"fairSharePreemptionTimeout":null,"minSharePreemptionTimeout":null,"name":"default","queues":[],"schedulablePropertiesList":[{"impalaDefaultQueryMemLimit":null,"impalaDefaultQueryOptions":null,"impalaMaxMemory":null,"impalaMaxQueuedQueries":null,"impalaMaxRunningQueries":null,"impalaQueueTimeout":null,"maxAMShare":-1.0,"maxChildResources":null,"maxResources":null,"maxRunningApps":null,"minResources":null,"scheduleName":"default","weight":1.0}],"schedulingPolicy":"drf","type":null},{"aclAdministerApps":null,"aclSubmitApps":null,"allowPreemptionFrom":null,"fairSharePreemptionThreshold":null,"fairSharePreemptionTimeout":null,"minSharePreemptionTimeout":null,"name":"users","queues":[],"schedulablePropertiesList":[{"impalaDefaultQueryMemLimit":null,"impalaDefaultQueryOptions":null,"impalaMaxMemory":null,"impalaMaxQueuedQueries":null,"impalaMaxRunningQueries":null,"impalaQueueTimeout":null,"maxAMShare":-1.0,"maxChildResources":null,"maxResources":null,"maxRunningApps":null,"minResources":null,"scheduleName":"default","weight":1.0}],"schedulingPolicy":"drf","type":"parent"}],"schedulablePropertiesList":[{"impalaDefaultQueryMemLimit":null,"impalaDefaultQueryOptions":null,"impalaMaxMemory":null,"impalaMaxQueuedQueries":null,"impalaMaxRunningQueries":null,"impalaQueueTimeout":null,"maxAMShare":null,"maxChildResources":null,"maxResources":null,"maxRunningApps":null,"minResources":null,"scheduleName":"default","weight":1.0}],"schedulingPolicy":"drf","type":null}],"userMaxAppsDefault":1,"users":[]}
Here is what the queue description looks like when the application is still in ACCEPTED state: [screenshot]
Likewise, here is the record from the YARN RM UI (note that the resources (memory/CPU) are allocated and Running Containers shows 1 container running): [screenshot]
Here is the Application Summary: [screenshot]
Here are the application logs (empty): [screenshot]
And, lastly, here is what the driver sees:
19/12/26 00:16:42 INFO Client:
client token: N/A
diagnostics: Application application_1577297544619_0002 failed 1 times due to AM Container for appattempt_1577297544619_0002_000001 exited with exitCode: 10
For more detailed output, check application tracking page:http://quickstart.cloudera:8088/proxy/application_1577297544619_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1577297544619_0002_01_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
at org.apache.hadoop.util.Shell.run(Shell.java:507)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.default
start time: 1577299469533
final status: FAILED
tracking URL: http://quickstart.cloudera:8088/cluster/app/application_1577297544619_0002
user: shepanch
19/12/26 00:16:42 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:165)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:512)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2511)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at cloudera.jobs.ClouderaSampleJob$.delayedEndpoint$cloudera$jobs$ClouderaSampleJob$1(ClouderaSampleJob.scala:17)
at cloudera.jobs.ClouderaSampleJob$delayedInit$body.apply(ClouderaSampleJob.scala:6)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at cloudera.jobs.ClouderaSampleJob$.main(ClouderaSampleJob.scala:6)
at cloudera.jobs.ClouderaSampleJob.main(ClouderaSampleJob.scala)
Is there anything that can be done to solve this issue?
After all the research, and apart from the reasons covered in the questions linked above, I found that this can happen for various reasons:
when you have different versions of Spark in the client (driver) and the cluster. Once you ensure that both bundle the same version of Spark, it runs fine.
you might need to set the property spark.driver.host. Make sure the IP passed in here can be pinged from the guest VM.
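A sketch of how spark.driver.host can be passed at submit time (the IP and jar name are placeholders; the class name is the one from the stack trace above):
spark-submit --master yarn --deploy-mode client \
  --conf spark.driver.host=192.168.56.1 \
  --class cloudera.jobs.ClouderaSampleJob \
  cloudera-sample-job.jar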

Red5 Service Fails to Start on Win 10 - Incorrect Function

I installed Red5. Service installed ok, but when I manually try to start it, I get the following error in my Windows Event Log:
"The Red5 Media Server service terminated with the following service-specific error:
Incorrect function"
In the commons-daemon.log, I see the following:
[2017-05-17 20:36:54] [info] [11044] Starting service...
[2017-05-17 20:36:54] [error] [11044] Failed creating java
[2017-05-17 20:36:54] [error] [11044] ServiceStart returned 1
[2017-05-17 20:36:54] [info] [13816] Run service finished.
[2017-05-17 20:36:54] [info] [13816] Commons Daemon procrun finished
Event ID was 7024. Any help would be appreciated. Thanks!
Had the same issue on Windows Server 2012 R2. The solution was to install the latest release of the Java SE Development Kit. In my case I just installed Java SE DK 8u144: http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-windows-x64.exe
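After the install, it's worth confirming the new JDK is the one the system actually picks up before starting the service again:
java -version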

Why does 'neo4j console' work when 'neo4j start' doesn't?

I want to use neo4j. I installed neo4j-community 2.3.2-1 from the Arch Linux AUR, and when I use neo4j console everything works fine. But when I try to start the server in the background with neo4j start, it won't start, giving this error message:
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server...process [20559]... waiting for server to be ready... Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
The server did not try to start for 120 seconds, more like 2 seconds. In addition, I cannot find the log files anywhere.
Google told me to try out neo4j start-no-wait.
When I run that command I get:
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server...process [21088]...Started the server in the background, returning...
But nothing is started and the web client doesn't work like it does when I use neo4j console.
So my basic question is: why does neo4j console work while neo4j start does not? And how can I start neo4j in the background and stop it again without killing the process?
EDIT:
the console.log says:
2016-01-29 12:51:18.338+0100 INFO Successfully shutdown Neo4j Server
2016-01-29 12:51:18.348+0100 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#77e4c80f' was successfully initialized, but failed to start. Please see attached cause exception. Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#77e4c80f' was successfully initialized, but failed to start. Please see attached cause exception.
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#77e4c80f' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:97)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.CommunityBootstrapper.main(CommunityBootstrapper.java:35)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#77e4c80f' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:462)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:194)
... 3 more
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /var/lib/neo4j/data/graph.db/messages.log (Permission denied)
at org.neo4j.kernel.impl.factory.PlatformModule.createLogService(PlatformModule.java:261)
at org.neo4j.kernel.impl.factory.PlatformModule.<init>(PlatformModule.java:140)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.createPlatform(GraphDatabaseFacadeFactory.java:181)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:124)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.newFacade(CommunityFacadeFactory.java:43)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.server.CommunityNeoServer$1.newGraphDatabase(CommunityNeoServer.java:66)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:95)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: java.io.FileNotFoundException: /var/lib/neo4j/data/graph.db/messages.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.neo4j.io.fs.DefaultFileSystemAbstraction.openAsOutputStream(DefaultFileSystemAbstraction.java:61)
at org.neo4j.io.file.Files.createOrOpenAsOuputStream(Files.java:47)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.openOutputFile(RotatingFileOutputStreamSupplier.java:254)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.<init>(RotatingFileOutputStreamSupplier.java:138)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.<init>(RotatingFileOutputStreamSupplier.java:122)
at org.neo4j.kernel.impl.logging.StoreLogService.<init>(StoreLogService.java:164)
at org.neo4j.kernel.impl.logging.StoreLogService.<init>(StoreLogService.java:43)
at org.neo4j.kernel.impl.logging.StoreLogService$Builder.toFile(StoreLogService.java:110)
at org.neo4j.kernel.impl.logging.StoreLogService$Builder.inStoreDirectory(StoreLogService.java:105)
at org.neo4j.kernel.impl.factory.PlatformModule.createLogService(PlatformModule.java:252)
... 13 more
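The root cause is visible at the bottom of the trace: /var/lib/neo4j/data/graph.db/messages.log can't be opened (Permission denied). A plausible repair, assuming the packaged background service runs as a dedicated neo4j user while neo4j console previously ran as another user (e.g. yourself or root) and left files it owns in the store, is to hand the data directory back to the service user; the open-files warning can be addressed at the same time:
# assumption: the service user is 'neo4j'; adjust paths/user to your install
sudo chown -R neo4j:neo4j /var/lib/neo4j/data
# per the 'Max 1024 open files' warning (persist via /etc/security/limits.conf)
ulimit -n 40000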