How to fix "Could not create the Java Virtual Machine" error in Apache Storm? - apache

I am setting up Apache Storm in distributed mode. My ZooKeeper ensemble is working fine, but I am unable even to start Storm Nimbus.
I am following: http://chennaihug.org/knowledgebase/storm-multinode-installation/
ZooKeeper config file:
tickTime=2000
dataDir=/data/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=scarlet:2888:3888
server.2=plum:2888:3888
server.3=peacock:2888:3888
server.4=green:2888:3888
server.5=mustard:2888:3888
server.6=white:2888:3888
storm.yaml:
storm.zookeeper.servers:
- "scarlet"
- "plum"
- "green"
- "white"
- "mustard"
- "peacock"
nimbus.host: "scarlet"
storm.zookeeper.port: 2181
java.library.path: "/usr/lib/jvm/java-8-oracle"
storm.local.dir: "/app/storm"
I connected to the ZooKeeper ensemble using the CLI:
/opt/zookeeper-3.4.10/bin/zkCli.sh -server scarlet:2181,plum:2181,peacock:2181,green:2181,mustard:2181,white:2181
I checked the status of ZooKeeper: 5 followers and 1 leader. All working fine.
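For reference, the per-node role check that reports leader/follower status can be run like this (assuming the same ZooKeeper install path on every host):
/opt/zookeeper-3.4.10/bin/zkServer.sh status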
I start Apache Storm using:
bin/storm nimbus
which gives the error:
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
(the same three lines are printed four times in total)
Running: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Ddaemon.name=nimbus -Dstorm.options= -Dstorm.home=/opt/apache-storm-1.2.2 -Dstorm.log.dir= -Djava.library.path= -Dstorm.conf.file= -cp /opt/apache-storm-1.2.2/*:/opt/apache-storm-1.2.2/lib/*:/opt/apache-storm-1.2.2/extlib/*:/opt/apache-storm-1.2.2/extlib-daemon/*:/opt/apache-storm-1.2.2/conf -Dlogfile.name=nimbus.log -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -Dlog4j.configurationFile=/opt/apache-storm-1.2.2/cluster.xml org.apache.storm.daemon.nimbus
2019-01-14 17:20:21,591 main ERROR Unable to create file /nimbus.log java.io.IOException: Permission denied

It turned out the problem was the Java installation. Purging OpenJDK completely and re-installing it fixed it:
apt purge default-jdk default-jdk-headless openjdk-8-jdk openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless
apt install openjdk-8-jdk
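To confirm the reinstalled JDK is the one Storm's launcher picks up, a quick sanity check along these lines can help (the JAVA_HOME path below is the usual Ubuntu location for OpenJDK 8 and is an assumption here):
java -version
readlink -f "$(which java)"
# optionally pin JAVA_HOME for the user running Storm, e.g. in ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"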

Related

Karate performance testing: getting build failure with error "Unrecognized VM option 'UseBiasedLocking'"

I am trying to reuse Karate tests for performance testing with Gatling and Scala.
I have configured everything as described in the documentation, but when I run the mvn command I get the error "Unrecognized VM option 'UseBiasedLocking'. Error: Could not create the Java Virtual Machine."
The mvn command used to run the tests: mvn test-compile gatling:test
I tried looking at the environment path variables and running the mvn command with different options, but I still get the same error:
Failed to execute goal io.gatling:gatling-maven-plugin:4.1.5:test (default-cli) on project PerformanceTesting: Gatling failed.: Process exited with an error: 1 (Exit value: 1)
The issue was resolved after upgrading gatling.plugin.version to 4.2.7, since my Java version (19) does not support the old Gatling plugin.
The article below helped resolve it:
Unrecognized VM option 'UseBiasedLocking' Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit
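For illustration, the fix amounts to bumping the plugin version in pom.xml. A minimal sketch, assuming the version is managed through the gatling.plugin.version property named in the answer:
<!-- pom.xml: pin the Gatling Maven plugin to a release that supports newer JDKs -->
<properties>
    <gatling.plugin.version>4.2.7</gatling.plugin.version>
</properties>

<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>${gatling.plugin.version}</version>
</plugin>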

Error: Command failed: gradlew.bat app:installDebug -PreactNativeDevServerPort=8082, app not installing in React Native, Gradle build failed

FAILURE: Build failed with an exception.
What went wrong: Unable to start the daemon process. This problem might be caused by incorrect configuration of the daemon. For example, an unrecognized jvm option is used. Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.9/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk-18.0.1.1\bin\java.exe -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens java.prefs/java.util.prefs=ALL-UNNAMED -Xmx2048m -Dfile.encoding=UTF-8 -Duser.country=IN -Duser.language=en -Duser.variant -cp C:\Users\Nagaraj\.gradle\wrapper\dists\gradle-6.9-all\dooywd8nv05k16orzxge2b1bs\gradle-6.9\lib\gradle-launcher-6.9.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.9
Please read the following process output to find out more:
Unrecognized VM option 'MaxPermSize=512m'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
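The error itself points at the cause: MaxPermSize was removed along with PermGen in Java 8, so JDK 18 rejects it outright. A hedged sketch of the usual repair, assuming the flag is set in the project's gradle.properties (the file path and the existing line are assumptions):
# android/gradle.properties
# before: org.gradle.jvmargs=-XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -Xmx2048m
# after: drop the removed PermGen flag; cap Metaspace instead if a limit is wanted
org.gradle.jvmargs=-Xmx2048m -XX:MaxMetaspaceSize=512m -XX:+HeapDumpOnOutOfMemoryError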

Can't start rabbitmq-server after installation on Windows

I tried to run the rabbitmq-server command in cmd, but it gives me this error:
Configuring logger redirection
13:44:01.865 [warning] Using RABBITMQ_ADVANCED_CONFIG_FILE: c:/Users/saikat/AppData/Roaming/RabbitMQ/advanced.config
13:44:02.838 [error]
13:44:02.838 [error] BOOT FAILED
13:44:02.838 [error] ===========
13:44:02.838 [error] ERROR: distribution port 25672 in use by another node: rabbit@DESKTOP-1I7H1RC
13:44:02.838 [error]
13:44:03.839 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","DESKTOP-1I7H1RC"} in context start_error
13:44:03.840 [error] CRASH REPORT Process <0.152.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","DESKTOP-1I7H1RC"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"DESKTOP-1I7H1RC\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Crash dump is being written to erl_crash.dump...done
No doubt you have already resolved this, but for anyone else it may help: I had the same issue on Windows and managed to resolve it by doing the following.
Open PowerShell as admin in the rabbitmq_server-3.8.9\sbin directory
Stop the service by running: .\rabbitmq-service.bat stop
Start the service by running: .\rabbitmq-server.bat
If you are on Windows, go to Services:
Search for RabbitMQ and right-click it
Stop the service
Open cmd as administrator
Run cd C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.17\sbin
Then run rabbitmq-server
For me, what worked was killing the erl process from Task Manager and then running:
rabbitmq-server.bat
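Since all three answers boil down to freeing distribution port 25672, it can help to see which process is holding it before stopping or killing anything. A small sketch using standard Windows tools (the PID value is a placeholder for whatever netstat reports):
netstat -ano | findstr :25672
rem note the PID in the last column, then identify the process:
tasklist /FI "PID eq 1234"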

Jenkins throws an error instantiating Firefox, works fine with Maven

1515175026602 mozrunner::runner INFO Running command: "C:\\Program Files\\Mozilla Firefox\\firefox.exe" "-marionette" "-profile" "C:\\Windows\\TEMP\\rust_mozprofile.MX9tmRHWAJFL"
1515175027227 Marionette INFO Enabled via --marionette
###!!! [Parent][MessageChannel] Error: (msgtype=0x240057,name=PContent::Msg_SetPluginList) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x24004C,name=PContent::Msg_GMPsChanged) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x15008F,name=PBrowser::Msg_UpdateNativeWindowHandle) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x150083,name=PBrowser::Msg_Destroy) Channel error: cannot send/recv
A content process crashed and MOZ_CRASHREPORTER_SHUTDOWN is set, shutting down
The issue was fixed by installing Jenkins using jenkins.war from the command prompt. Earlier I had installed it using the .exe file, like usual Windows software.
The .exe installs Jenkins as a Windows service; the .war runs it as a plain Java executable.
Running java -jar jenkins.war from cmd, then configuring and running the build, resolves the problem.
This issue may also be caused by unsupported security preferences set in code. I faced it on macOS while setting this preference with Firefox v62 and geckodriver 0.24:
profile.setPreference("security.sandbox.content.level", 5);
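For context, a minimal sketch of where that preference sits in a Selenium Java test (the Selenium 3.x FirefoxOptions API is assumed, and the class name is made up for illustration):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;

public class FirefoxSandboxPrefExample {
    public static void main(String[] args) {
        FirefoxProfile profile = new FirefoxProfile();
        // setting this preference caused the crash described above;
        // remove or adjust it if Firefox fails to start
        profile.setPreference("security.sandbox.content.level", 5);

        FirefoxOptions options = new FirefoxOptions();
        options.setProfile(profile); // attach the profile to the driver options

        WebDriver driver = new FirefoxDriver(options);
        driver.quit();
    }
}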

Apache Hadoop installation on Windows 10

While setting up a single-node cluster without Cygwin on Windows 10, I followed this document: Link for Hadoop installation in windows 10
I am facing the below error while starting HDFS with D:\hadoop-2.6.2.tar\hadoop-2.6.2\hadoop-2.6.2\sbin>start-dfs.cmd
Error message stack trace:
17/01/12 12:25:42 FATAL datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:620)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
17/01/12 12:25:42 INFO util.ExitUtil: Exiting with status 1
There is also this error message about starting the namenode:
17/01/12 12:25:43 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1022)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
17/01/12 12:25:43 INFO util.ExitUtil: Exiting with status 1
[Problem analysis] The permissions on the /data directory are insufficient, so the NameNode cannot be started.
[Solution]
(1) As root, assign ownership of the /data directory to the hadoop user;
(2) Empty the /data directory;
(3) Reformat the NameNode and restart the Hadoop cluster (a command sketch follows below).
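A hedged shell sketch of those three steps on a Linux-style layout (the hadoop user and group, the /data path, and the HDFS scripts being on PATH are all assumptions; step 2 destroys any existing HDFS data):
# (1) give the hadoop user ownership of the data directory
sudo chown -R hadoop:hadoop /data
# (2) empty the directory -- this deletes all existing HDFS blocks and metadata
sudo rm -rf /data/*
# (3) reformat the NameNode and restart the cluster
hdfs namenode -format
stop-dfs.sh && start-dfs.sh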