I am setting up Apache Storm in distributed mode. My ZooKeeper ensemble is working fine, but I am unable to even start Apache Storm Nimbus.
I am following: http://chennaihug.org/knowledgebase/storm-multinode-installation/
Zookeeper config file:
tickTime=2000
dataDir=/data/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=scarlet:2888:3888
server.2=plum:2888:3888
server.3=peacock:2888:3888
server.4=green:2888:3888
server.5=mustard:2888:3888
server.6=white:2888:3888
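Each node also needs a myid file under dataDir matching its server.N line above; on scarlet (server.1), for example:
echo 1 > /data/zookeeper/myid    # use 2 on plum, 3 on peacock, and so on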
storm.yaml:
storm.zookeeper.servers:
- "scarlet"
- "plum"
- "green"
- "white"
- "mustard"
- "peacock"
nimbus.host: "scarlet"
storm.zookeeper.port: 2181
java.library.path: "/usr/lib/jvm/java-8-oracle"
storm.local.dir: "/app/storm"
I started zookeeper using:
/opt/zookeeper-3.4.10/bin/zkCli.sh -server scarlet:2181,plum:2181,peacock:2181,green:2181,mustard:2181,white:2181
I checked the status of ZooKeeper: 5 followers and 1 leader, all working fine.
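For reference, the per-node check looks like this (assuming the same install path as above); it prints Mode: follower or Mode: leader:
/opt/zookeeper-3.4.10/bin/zkServer.sh status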
I start Apache Storm using:
bin/storm nimbus
which fails with the error:
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Running: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Ddaemon.name=nimbus -Dstorm.options= -Dstorm.home=/opt/apache-storm-1.2.2 -Dstorm.log.dir= -Djava.library.path= -Dstorm.conf.file= -cp /opt/apache-storm-1.2.2/*:/opt/apache-storm-1.2.2/lib/*:/opt/apache-storm-1.2.2/extlib/*:/opt/apache-storm-1.2.2/extlib-daemon/*:/opt/apache-storm-1.2.2/conf -Dlogfile.name=nimbus.log -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -Dlog4j.configurationFile=/opt/apache-storm-1.2.2/cluster.xml org.apache.storm.daemon.nimbus
2019-01-14 17:20:21,591 main ERROR Unable to create file /nimbus.log java.io.IOException: Permission denied
Turns out the problem was the Java installation. I purged OpenJDK completely and re-installed it:
apt purge default-jdk default-jdk-headless openjdk-8-jdk openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless
apt install openjdk-8-jdk
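A quick sanity check after the reinstall (a hedged check, assuming java on the PATH now resolves to the fresh OpenJDK 8): the -client flag was removed in JDK 9+, so a working JDK 8 should accept it instead of printing the error above.
java -version            # should report 1.8.0_xxx
java -client -version    # accepted (and ignored) by a 64-bit JDK 8; JDK 9+ fails with "Unrecognized option: -client"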
My question is about the installation of an OpenShift environment using Minishift on VirtualBox.
minishift v1.4.1+0f658ea
VirtualBox-5.1.26-117224-Win.exe
The installation is incomplete due to the following error:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.1.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===========================================] 100.00% 0s
-- Starting Minishift VM ... | Unsupported driver: C:\Program
So, to solve this I simply pointed it at the directory where all the drivers are located in the installation and ran it again:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Starting Minishift VM ... / FAIL E0825 11:20:43.830638 1260 start.go:342]
Error starting the VM: Error getting the state for host: machine does not exist.
Retrying.
| FAIL E0825 11:20:44.297638 1260 start.go:342] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
/ FAIL E0825 11:20:44.612638 1260 start.go:342] Error starting the VM: Error getting the state for host: . Retrying.
Error starting the VM: Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
It says "machine does not exist", but shouldn't the machine be created by Minishift itself? (See the procedure here: blog.novatec-gmbh.de/getting-started-minishift-openshift-origin-one-vm/)
Not sure what is causing this. Please guide.
The main issue with the command -- and what it's really complaining about -- is that you're passing in an unquoted path:
minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
should have been
minishift.exe start --vm-driver="C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe"
But according to the MiniShift documentation, you should update to VirtualBox 5.1.12+ (which you have) and use the following syntax:
minishift.exe start --vm-driver=virtualbox
Seven months after this question was asked, and using VirtualBox v4.3.30, I can get MiniShift v1.15.1 running with the last command, but I can't get it to accept your previous syntax or even reproduce the same error with it.
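If the earlier failed runs left a broken VM behind (the repeated "machine does not exist" suggests stale state), clearing Minishift's state before retrying may also help; a sketch, assuming the default profile:
minishift.exe delete                         # removes the stale Minishift VM and its local state
minishift.exe start --vm-driver=virtualbox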
UPDATED error message
I am getting a BOOT FAILED error every time I try to start the rabbitmq server. Does anybody know how I can fix this? I have attached the error message. I have tried a few different things including uninstalling and reinstalling it and am now getting a new error message, but am at a loss for what to try next. Any suggestions are much appreciated! Thank you!!
BOOT FAILED
===========
Error description:
{error,
{schema_integrity_check_failed,
[{table_missing,rabbit_exchange_serial},
{table_missing,rabbit_runtime_parameters},
{table_missing,rabbit_durable_queue},
{table_missing,rabbit_queue},
{table_missing,gm_group},
{table_missing,mirrored_sup_childspec}]}}
Log files (may contain more information):
/usr/local/var/log/rabbitmq/rabbit@localhost.log
/usr/local/var/log/rabbitmq/rabbit@localhost-sasl.log
Stack trace:
[{rabbit_mnesia,ensure_schema_integrity,0,
[{file,"src/rabbit_mnesia.erl"},{line,519}]},
{rabbit_mnesia,init_db,3,[{file,"src/rabbit_mnesia.erl"},{line,450}]},
{rabbit_mnesia,init_db_and_upgrade,3,
[{file,"src/rabbit_mnesia.erl"},{line,458}]},
{rabbit_mnesia,init,0,[{file,"src/rabbit_mnesia.erl"},{line,99}]},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1,
[{file,"src/rabbit.erl"},{line,488}]},
{rabbit,run_boot_step,1,[{file,"src/rabbit.erl"},{line,487}]},
{rabbit,'-start/2-lc$^0/1-0-',1,[{file,"src/rabbit.erl"},{line,453}]},
{rabbit,start,2,[{file,"src/rabbit.erl"},{line,453}]}]
BOOT FAILED
===========
Error description:
{could_not_start,rabbit,
{bad_return,
{{rabbit,start,[normal,[]]},
{'EXIT',
{rabbit,failure_during_boot,
{error,
{schema_integrity_check_failed,
[{table_missing,rabbit_exchange_serial},
{table_missing,rabbit_runtime_parameters},
{table_missing,rabbit_durable_queue},
{table_missing,rabbit_queue},
{table_missing,gm_group},
{table_missing,mirrored_sup_childspec}]}}}}}}}
Log files (may contain more information):
/usr/local/var/log/rabbitmq/rabbit@localhost.log
/usr/local/var/log/rabbitmq/rabbit@localhost-sasl.log
{"init terminating in do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{error,{schema_integrity_check_failed,[{table_missing,rabbit_exchange_serial},{table_missing,rabbit_runtime_parameters},{table_missing,rabbit_durable_queue},{table_missing,rabbit_queue},{table_missing,gm_group},{table_missing,mirrored_sup_childspec}]}}}}}}}}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
I don't know how RabbitMQ works, but the error message looks clear: it tries to delete the directory /usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost-plugins-expand and fails because the process does not have the access rights to delete the file /usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost-plugins-expand/amqp_client-3.1.3/ebin/amqp_auth_mechanisms.beam.
Take a look at who owns this file and directory, and what the access rights on them are.
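For example, something along these lines (paths taken from the error message; the account to hand ownership to is an assumption, since a Homebrew install normally runs the broker as your own user):
ls -ld /usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost-plugins-expand
ls -l /usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost-plugins-expand/amqp_client-3.1.3/ebin/amqp_auth_mechanisms.beam
# if ownership is wrong, give the tree back to the account that runs rabbitmq-server:
sudo chown -R "$(whoami)" /usr/local/var/lib/rabbitmq/mnesia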
This occurred for me during a rabbitmq upgrade with brew.
It was easier for me to just remove the directory altogether and install from scratch:
sudo rm -rf /usr/local/var/rabbitmq/
brew uninstall rabbitmq;
brew install rabbitmq
rabbitmq-server
Got this to work. Just delete the database directory and restart the server. Note that if you installed with brew, the database might still be outside the Cellar directory, so you need to delete that directory manually and restart.
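Concretely, something like the following (the path is the usual Homebrew location and may differ on your machine; this wipes all queues, exchanges and users, so only do it if you can afford to lose them):
brew services stop rabbitmq        # or: rabbitmqctl stop
rm -rf /usr/local/var/lib/rabbitmq/mnesia
rabbitmq-server                    # starts again with a freshly created schema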
I have a problem running rabbitmq-server on CentOS 6.
I'm getting the following message while trying to start rabbitmq-server:
starting networking ...BOOT ERROR: FAILED
Reason: {badmatch,
{error,
{shutdown,
{child,undefined,'rabbit_tcp_listener_sup_:::5672',
{tcp_listener_sup,start_link,
[{0,0,0,0,0,0,0,0},
5672,
[inet6,binary,
{packet,raw},
{reuseaddr,true},
{backlog,128},
{nodelay,true},
{exit_on_close,false}],
{rabbit_networking,tcp_listener_started,[amqp]},
{rabbit_networking,tcp_listener_stopped,[amqp]},
{rabbit_networking,start_client,[]},
"TCP Listener"]},
transient,infinity,supervisor,
[tcp_listener_sup]}}}}
Stacktrace: [{rabbit_networking,start_listener0,4},
{rabbit_networking,'-start_listener/4-lc$^0/1-0-',4},
{rabbit_networking,start_listener,4},
{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1}]
Erlang has closed
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}}"}
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})
In rabbitmq-env.conf I have:
NODENAME=main
CONFIG_FILE=/etc/rabbitmq/
Also, in rabbitmq.config I have:
[
{rabbit, [{tcp_listeners, [{"0.0.0.0", 5672}]}]}
].
Does anyone know where the problem is?
Thanks in advance