RabbitMQ management plugin with local cluster

Is there any reason that the rabbitmq-management plugin wouldn't work when I'm using 'rabbitmq-multi' to spin up a cluster of nodes on my desktop? Or, more precisely, that the management plugin would cause that spinup to fail?
I get Error: {node_start_failed,normal} when rabbitmq-multi starts rabbit_1@localhost.
The first node, rabbit@localhost, seems to start okay though.
If I take out the management plugins, all the nodes start up (and then cluster) fine. I think I'm using a recent enough Erlang version (5.8/OTP R14A according to the README in my erl5.8.2 folder). I'm using all the plugins that are listed as required on the plugins page, including mochiweb, webmachine, amqp_client, rabbitmq-mochiweb, rabbitmq-management-agent, and rabbitmq-management. Those plugins, and only those plugins.

The problem is that rabbitmq-multi only assigns sequential ports for AMQP, not HTTP (or STOMP or AMQPS or anything else the broker may open). Therefore each node tries to listen on the same port for the management plugin and only the first succeeds. rabbitmq-multi will be going away in the next release; this is one reason why.
I think you'll want to start the nodes without using rabbitmq-multi, just with multiple invocations of rabbitmq-server, using environment variables to configure each node differently. I use a script like:
start-node.sh:
#!/bin/sh
# $1 = AMQP port, $2 = node name.
# Each node gets its own Mnesia dir, plugins scratch dir and
# management (mochiweb) listener; prefixing the AMQP port with
# "5" yields a unique HTTP port per node (e.g. 55672, 55673).
RABBITMQ_NODE_PORT=$1 RABBITMQ_NODENAME=$2 \
RABBITMQ_MNESIA_DIR=/tmp/rabbitmq-$2-mnesia \
RABBITMQ_PLUGINS_EXPAND_DIR=/tmp/rabbitmq-$2-plugins-scratch \
RABBITMQ_LOG_BASE=/tmp \
RABBITMQ_SERVER_START_ARGS="-rabbit_mochiweb port 5$1" \
/path/to/rabbitmq-server -detached
and then invoke it as
start-node.sh 5672 rabbit
start-node.sh 5673 hare
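Once both nodes are up, they still have to be clustered by hand (rabbitmq-multi previously did that for you). A minimal sketch, assuming a release with the join_cluster command (releases from the rabbitmq-multi era spelled this rabbitmqctl cluster instead) and with HOSTNAME standing in for your machine's host name:
rabbitmqctl -n hare stop_app
rabbitmqctl -n hare join_cluster rabbit@HOSTNAME
rabbitmqctl -n hare start_app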


RabbitMQ SSL Clustering on Windows Breaks rabbitmqctl

I'm trying to follow the steps in the RabbitMQ docs here to get clustering with SSL working on Windows. I'm noticing though that the "rabbitmqctl status" command starts failing after the environment variables defined in those steps are set. I'm getting the following error when executing "rabbitmqctl status":
Error: unable to connect to node 'rabbit@server1': nodedown
I've already configured RabbitMQ to use TLS 1.2 and have verified that it's working. I've ensured that my Erlang 18 cookie is the same in the user directory C:\users\me and C:\Windows on the machine, but the error persists, and is stopping other servers from clustering with it. The docs say that the Windows SSL Cluster setup is "Coming soon"... Here are the steps I've taken so far on server1. I think that Erlang wants forward slashes in the paths - this matches the rabbit.config SSL settings.
Combined the contents of my server\cert.pem and server\key.pem into rabbit.pem via the command "type server\cert.pem server\key.pem > server\rabbit.pem"
Created environment variable ERL_SSL_PATH and set to: "C:/Program Files/erl7.0/lib/ssl-7.0/ebin"
Created environment variable RABBITMQ_CTL_ERL_ARGS and set to: -pa "%ERL_SSL_PATH%" -proto_dist inet_tls -ssl_dist_opt server_certfile C:/OpenSSL-Win64/server/rabbit.pem -ssl_dist_opt server_secure_renegotiate true client_secure_renegotiate true
Created environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS and set to same value as RABBITMQ_CTL_ERL_ARGS
Copied the Erlang cookie at C:\Windows\.erlang.cookie to my local user profile directory.
Restarted rabbit using rabbitmq-service start
At this point, on server1, "rabbitmqctl status" no longer works. Attempts to join server2 to server1 result in a "node down" error.
Edit 1: I can't get the initial step in the docs working, where you ask Erlang to report its SSL directory on Windows in order to set ERL_SSL_PATH correctly. Erlang is installed at C:\Program Files\erl7.0 on my server.
Edit 2: Using werl.exe (at C:\Program Files\erl7.0\bin\werl.exe), I was able to issue the command "Foo=io:format(code:lib_dir(ssl, ebin))." and it reported the path as c:/Program Files/erl7.0/lib/ssl-7.0/ebin. However, this doesn't seem to be the cause of the issue, since that's already the path I was using.
Thanks,
Andy
For environment changes to take effect on Windows, the service must be re-installed. It is not sufficient to restart the service. This can be done using the installer or on the command line with administrator permissions (source).
The following will do it:
rabbitmq-service.bat stop
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
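Note that the service only sees variables set machine-wide, not ones exported in your current shell. A sketch using setx with the /M (machine) switch and the ERL_SSL_PATH value from the question, which may differ on your server; repeat for the other RABBITMQ_* variables before re-installing:
setx ERL_SSL_PATH "C:/Program Files/erl7.0/lib/ssl-7.0/ebin" /M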
Also, if the other cluster nodes were running while the node you're working on was down, they may assume its state has gone out of sync. In that case, the node might fail to start up and you might need to:
rabbitmqctl force_boot
Check the logs (at %RABBIT_BASE%\log\rabbit@server.log) to confirm.
Late answer, but hopefully this helps a searcher...

How to submit code to a remote Spark cluster from IntelliJ IDEA

I have two clusters, one in a local virtual machine and another in a remote cloud. Both clusters are in standalone mode.
My Environment:
Scala: 2.10.4
Spark: 1.5.1
JDK: 1.8.40
OS: CentOS Linux release 7.1.1503 (Core)
The local cluster:
Spark Master: spark://local1:7077
The remote cluster:
Spark Master: spark://remote1:7077
What I want to do:
Write code (just a simple word count) in IntelliJ IDEA locally (on my laptop), set the Spark Master URL to spark://local1:7077 or spark://remote1:7077, and then run my code in IntelliJ IDEA. That is, I don't want to use spark-submit to submit a job.
But I ran into a problem:
When I use the local cluster, everything goes well; running the code in IntelliJ IDEA or using spark-submit both submit the job to the cluster and finish it.
But when I use the remote cluster, I get this warning in the log:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
(Note that it says sufficient resources, not sufficient memory!)
This log line keeps printing and nothing further happens. Both spark-submit and running the code in IntelliJ IDEA give the same result.
I want to know:
Is it possible to submit code from IntelliJ IDEA to the remote cluster?
If so, what configuration is needed?
What are the possible reasons for my problem?
How can I handle this problem?
Thanks a lot!
Update
There is a similar question here, but I think my situation is different. When I run my code in IntelliJ IDEA with the Spark Master set to the local virtual machine cluster, it works. Against the remote cluster, I get the Initial job has not accepted any resources;... warning instead.
I want to know whether a security policy or firewall could cause this?
Submitting code programmatically (e.g. via SparkSubmit) is quite tricky. At the least there is a variety of environment settings and considerations - handled by the spark-submit script - that are quite difficult to replicate within a Scala program. I am still uncertain how to achieve it, and there have been a number of long-running threads within the Spark developer community on the topic.
My answer here is about a portion of your post: specifically the
TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have
sufficient resources
The reason is typically a mismatch between the memory and/or number of cores requested by your job and what is available on the cluster. Possibly, when submitting from IJ, the settings in
$SPARK_HOME/conf/spark-defaults.conf
did not match the parameters required for your task on the existing cluster. You may need to update:
spark.driver.memory 4g
spark.executor.memory 8g
spark.executor.cores 8
You can check the Spark UI on port 8080 to verify that the resources you requested are actually available on the cluster.
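If you would rather not rely on spark-defaults.conf when launching from IntelliJ, the same settings can go on the SparkConf in code. A minimal word-count sketch against Spark 1.5 / Scala 2.10; the input path and resource values are illustrative placeholders, so size the latter to what your remote workers actually advertise:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("word-count")
      .setMaster("spark://remote1:7077")
      // Resource requests must fit what the workers advertise,
      // otherwise the scheduler waits forever and keeps printing
      // the "Initial job has not accepted any resources" warning.
      .set("spark.executor.memory", "1g")
      .set("spark.cores.max", "2")
    val sc = new SparkContext(conf)

    val counts = sc.textFile("hdfs:///tmp/input.txt") // hypothetical input
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    sc.stop()
  }
}

Note also that with a standalone master the executors must be able to connect back to the driver running on your laptop, so the firewall you mention in your update can produce exactly this symptom even when memory and cores match.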

elasticsearch-mesos not getting listed under frameworks of the Mesos UI

I am trying to run elasticsearch-mesos on Mesos. My machine is running Ubuntu 14.04. I have a running Mesos cluster installed with the Mesosphere packages by following these instructions. When I run the test frameworks they get listed under the frameworks tab of the Mesos UI, but elasticsearch-mesos does not. I want to run elasticsearch-mesos on top of Mesos. I followed the instructions given here. When I run ./elasticsearch-mesos I get a message in the terminal:
I0108 17:24:01.898540 23861 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I tried running ./elasticsearch-mesos on both the Mesos masters and slaves.
The last few lines of the terminal output are given below:
2015-01-08 17:24:01,881:23844(0x7f175bfff700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f1762a3e6a0 sessionId=0 sessionPasswd=<null> context=0x7f1710002530 flags=0
I0108 17:24:01.881392 23858 sched.cpp:137] Version: 0.21.1
2015-01-08 17:24:01,881:23844(0x7f172b7fe700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:2181]
2015-01-08 17:24:01,897:23844(0x7f172b7fe700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:2181], sessionId=0x14ac7c469270006, negotiated timeout=10000
I0108 17:24:01.898455 23861 group.cpp:313] Group process (group(1)@127.0.1.1:38668) connected to ZooKeeper
I0108 17:24:01.898509 23861 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0108 17:24:01.898540 23861 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
According to the README at https://github.com/mesosphere/elasticsearch-mesos,
you may need to modify mesos.master.url to point to the same ZK url that the Mesos master is using (maybe not localhost). If you're using a single-master Mesos cluster, you can skip the ZK url and point this parameter directly to the Mesos master.
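A sketch of what that setting might look like in the framework's configuration file (the exact file is described in the project's README; the host names below are placeholders):
mesos.master.url: zk://zk-host:2181/mesos
or, with a single master, pointing at the master directly:
mesos.master.url: master-host:5050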
Please also note that the elasticsearch framework is a bit outdated, so use it with caution.

Mule ESB: Is it possible to start 2 instances of the Mule ESB

I created two separate directories in which I installed the standalone Mule ESB server:
/ee/mmc-distribution-mule-console-bundle-3.5.2-HF1
/ee2/mmc-distribution-mule-console-bundle-3.5.2-HF1
I start up the first server, and below is the status:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./status.sh
MMC is running as PID=1998.
Mule Enterprise Edition is running as PID=2619.
Then I try to start the second instance:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./startup.sh
Port 8585 is in use, please make it available and try again.
So apparently port 8585 is being used by the original instance.
So I stop the first instance and start the second instance, which comes up successfully, as follows:
./startup.sh
Please enter the desired port for Mule [Default 7777]:
Starting MMC, please wait...
class com.sun.jersey.multipart.impl.MultiPartConfigProvider
class com.sun.jersey.multipart.impl.MultiPartReader
class com.sun.jersey.multipart.impl.MultiPartWriter
[11-13 16:49:19] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-1]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
[11-13 16:49:32] WARN HttpMethodBase [http-bio-8585-exec-12]: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
[11-13 16:49:38] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-12]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
Nov 13, 2014 4:49:50 PM org.apache.catalina.core.StandardServer await
INFO: A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Nov 13, 2014 4:49:50 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-8585"]
But notice it seems to be using 8585 for Tomcat (which I know little about, except that it's some sort of app server; I've never used it).
I examined this site:
http://www.mulesoft.org/documentation/display/33X/Running+Multiple+Mule+Instances
but it does not discuss the issue, and the page it points to does not seem current. Did I misunderstand something?
Is it possible to run two separate instances of Mule ESB at the same time,
and if so, how? (How would I change the port it's using, and what file should I modify?)
Thanks
Edit: my second post, in response to the answer:
(BTW: I am using Mule ESB standalone Enterprise Edition 3.5.2)
To make sure I did not have any apps running on port 8585, I shut down my original instance, created two new instances, and made sure no apps were deployed to either instance.
I brought up the first instance without issue, but the second instance still gives me the port 8585 in use error (from startup.sh).
This site says that the MMC default port is 7777, but the Tomcat default port on which it runs is 8585:
http://www.mulesoft.org/documentation/display/current/Setting+Up+MMC-Mule+ESB+Communications
I used the following command to find all files within my second instance referencing port 8585:
find . -type f | xargs grep "8585"
Other than log files, I got two hits:
startup.sh
and
/mmc-3.5.2-HF1/apache-tomcat-7.0.52/conf/server.xml
I did NOT find in either instance the $MULE_HOME/apps/mmc/mule-config.xml (probably because I have no apps deployed)
In server.xml, the MMC apparently uses Tomcat to handle the MMC application, and server.xml contains the following:
<Connector port="8585" protocol="HTTP/1.1"
So I guess I could change 8585 to 8586 at this point, but...
startup.sh has several (about 9 or 10) hardcoded references to 8585, used to check whether the MMC is running and to take action either way.
So do I actually have to change every 8585 reference in startup.sh to 8586 in the second instance, as well as change the server.xml port 8585 reference?
Thanks
You can run as many instances as you want, as long as they don't use the same ports. It looks like you are deploying something on port 8585, so in the second instance you have to select a different port.
Is that port being used by any application that you developed and deployed in the Mule runtime?
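In practice that means editing the second instance's apache-tomcat-*/conf/server.xml you found, changing the Connector to a free port, for example (8586 is an arbitrary choice):
<Connector port="8586" protocol="HTTP/1.1"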
Also, if you are using the Mule runtime with the MMC agent activated, you have to change the port for the agent in the second instance as well. I think you can do that in /conf/wrapper.conf or by passing the startup script the following parameter:
-Dmule.mmc.bind.port=7778
(or any port that is free).
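For example, a usage sketch (7778 is arbitrary; any free port works):
./startup.sh -Dmule.mmc.bind.port=7778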
You can run as many as you want.
In MMC you can deploy and run many applications, each application in its own instance.

Deploy & Debug remote Jetty with IntelliJ 12

I've been hacking and googling for a while now, and I've found several Stack Overflow threads that seemed like they were written for older versions of IntelliJ, with various application servers. Usually they tell you to enter
java -Xdebug -Xrunjdwp:transport=dt_socket,address=51887,suspend=n,server=y
One answer suggests using something like
-agentlib:jdwp:transport=dt_socket,address=51887,suspend=n,server=y
But then I get this:
Error occurred during initialization of VM
Could not find agent library: libjdwp:transport.jnilib (searched /Library/Java/JavaVirtualMachines/1.6.0_37-b06-434.jdk/Contents/Libraries:/System/Library/Java/Extensions:/Library/Java/Extensions:.)
Then, after one or the other of the above, they tell you something like "Edit Configurations > Jetty > Remote and enter localhost, 51887" (the port number varies).
However, in 12 the page you land on after you select Remote has a plethora of options and asks for JNDI ports, not JDWP ports; on another tab it actually suggests the JDWP parameters above.
Researching the JNDI port bit generally yields instructions to add args like these to your command line:
-Dcom.sun.management.jmxremote= \
-Dcom.sun.management.jmxremote.port=1099 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false
I've done that too, and I can see port 1099 held by java (using lsof) and I can telnet to 1099, so I know the JVM is listening. (We'll try not to worry about the fact that this appears to mean: open up a port by which anyone can install arbitrary Java code on your computer over the network, without a password.)
However, in IntelliJ, whenever I try to deploy and debug, it gives me the following message:
[screenshot of the error balloon]
I can see Java RMI communications over 1099 when I snoop port 1099 with Wireshark (but they are illegible). Evidently the communications are not satisfactory to IntelliJ, so I'm wondering if there's something I need to do to Jetty to get it to play nice. Note that changing the Jetty version is not presently an option, so let's not go there :).
I've also tried removing the artifact, disabling Make, and trying to just connect the debugger, but it still gives me the same red balloon and error message, so evidently the JNDI (port 1099) part is required.
Does anyone see something I'm doing wrong, or know of something else I should do to get this to work?
(I'm wondering if it is something similar to this JBoss issue: http://youtrack.jetbrains.com/issue/IDEA-65746)
Edit: Thanks to this Google Groups post I've discovered that it is possible to get the debugger connected if you don't specify Edit Configurations > + > Jetty > Remote, but instead choose Edit Configurations > + > Remote; but debug and deploy is what I'm after, so that's only half a solution.
Jetty remote configuration requires several manual steps which are performed automatically when you start Jetty directly from IDEA using the local configuration instead.
If you absolutely must use the remote configuration, try the following steps:
In the Remote staging section of the Server tab of the IDEA Jetty remote run configuration:
specify Same file system for Type and Host
specify path to the <Jetty home>/contexts folder in the Local path field of the contexts section
(settings will differ if you have Jetty running on another machine than IDEA, but I assume it's the same machine in your case)
Pass the following VM parameters to the Jetty process:
-Dcom.sun.management.jmxremote=
-Dcom.sun.management.jmxremote.port=<JNDI port>
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-DOPTIONS=jmx
The <JNDI port> value should be the same as specified in the JNDI port field of the IDEA Jetty run configuration.
Pass the following configuration files to the Jetty process (in the command line):
etc/jetty-jmx.xml
etc/jetty.xml
If you need to debug, you should also pass to the Jetty process the VM parameters taken from the IDEA Jetty run configuration: on the Startup/Connection tab, select the Debug list item under To debug remote server JVM ...
Here is a sample command line to start the Jetty process with all the required options:
java -Xdebug -Xrunjdwp:transport=dt_socket,address=60208,suspend=n,server=y -DSTOP.PORT=0 -Dcom.sun.management.jmxremote= -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -DOPTIONS=jmx -Dfile.encoding=UTF-8 -jar start.jar etc/jetty-jmx.xml etc/jetty.xml
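To sanity-check the JMX half independently of IDEA, you can try attaching a standard JMX client; a sketch assuming the JDK's jconsole is on your PATH and Jetty runs on the same machine:
jconsole localhost:1099
If jconsole can connect but IDEA still shows the error, the problem is more likely on the IDEA configuration side than on the Jetty side.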