Run environment: Linux (CentOS 7), JDK 1.8, and ActiveMQ 5.15
I started ActiveMQ, then visited the management page with Chrome. When I try to log in with the default username and password, I get the following error:
HTTP ERROR: 503
Problem accessing /admin/. Reason:
Service Unavailable Powered by Jetty://
How can I resolve this problem?
I was getting this same error. It turns out that I had originally run it as the root user, then later stopped it and ran it as a non-root user. Certain data files that had been created and owned by the original root instance were not accessible to the non-root user.
Check the ownership of the files, and change them if necessary to match the user that the broker is running as.
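For example, assuming the broker is installed under /opt/apache-activemq-5.15.0 and is meant to run as an activemq user (both the path and the user name are assumptions; adjust to your layout):

# See who currently owns the data files
ls -l /opt/apache-activemq-5.15.0/data
# Hand the installation back to the service user
sudo chown -R activemq:activemq /opt/apache-activemq-5.15.0

Then restart the broker as that user.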
Had the same issue.
Maybe something went wrong during the extraction of the package.
I downloaded this:
wget https://archive.apache.org/dist/activemq/5.15.0/apache-activemq-5.15.0-bin.tar.gz
and extracted it with:
sudo tar -zxvf apache-activemq-5.15.0-bin.tar.gz -C /opt
then it worked for me.
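To confirm the fresh extraction actually serves the console, you can start the broker and probe the admin page (8161 is the default web console port; the /opt path assumes the extraction above):

/opt/apache-activemq-5.15.0/bin/activemq start
curl -I http://localhost:8161/admin/

A 401 Unauthorized response here is a good sign: the console is up and just wants credentials, whereas the broken state returns 503.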
My two cents:
I started with the activemq package in the Ubuntu repo, but later changed to the binary package from the official website.
In my case, the repo version left behind an /etc/default/activemq config file, which runs ActiveMQ as the user "activemq". It turned out that in earlier experiments I had not killed the old processes running under "activemq" before starting ActiveMQ under my own user name. With two activemq processes running under different user names, connecting to the admin console gave me a 503.
I deleted the /etc/default/activemq file, killed all activemq processes running under "activemq", and restarted ActiveMQ under my own user name; the 503 was gone.
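Roughly, the cleanup was the following; the /opt path is an assumption, and the user name comes from the leftover config above:

# Look for brokers running under more than one user
ps -ef | grep -i activemq
# Drop the leftover repo config and kill the packaged user's processes
sudo rm /etc/default/activemq
sudo pkill -u activemq -f activemq
# Start again under your own user
/opt/apache-activemq-5.15.0/bin/activemq start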
Related
On a new installation of Cassandra 3.0.20 on Red Hat 7, I cannot list roles. I have tried the option of fixing /etc/alternatives/cassandra/cassandra.yaml with...
authenticator: PasswordAuthenticator
and then restart the service.
still when I run a simple command like LIST ROLES I get the following error.
cassandra@cqlsh> list roles;
Unauthorized: Error from server: code=2100 [Unauthorized] message="You have to be logged in and not anonymous to perform this request"
It turns out that systemctl was not completely stopping Cassandra, due to weirdness with Red Hat 7 and the init file, so the changes to my cassandra.yaml were not taking effect.
Once I killed Cassandra, made a proper cassandra.service, and restarted, the desired settings took effect, and I am able to run operations like "LIST ROLES;" normally.
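For reference, a minimal unit along these lines can serve as a starting point; the binary path, user, and pid file location are assumptions to adapt to your package layout:

# /etc/systemd/system/cassandra.service
[Unit]
Description=Apache Cassandra
After=network.target

[Service]
Type=forking
User=cassandra
Group=cassandra
ExecStart=/usr/sbin/cassandra -p /var/run/cassandra/cassandra.pid
PIDFile=/var/run/cassandra/cassandra.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target

After creating it, run systemctl daemon-reload and systemctl restart cassandra, then check systemctl status cassandra to confirm exactly one process is running.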
I have a working installation of Graylog 2.1 on Debian 8, but I had to install Graylog on CentOS 7 because my datacenter uses this distribution, and I want to have the same environment to avoid problems when I need to request changes in production.
I followed the Graylog guideline for CentOS 7, available at http://docs.graylog.org/en/2.1/pages/installation/os/centos.html, and installed Graylog 2.1.2. MongoDB, Elasticsearch, and Graylog are running and answer local requests via the terminal. However, the web interface is not available. The login page is presented, but when I try to connect using the admin user, I receive this answer:
Error - the server returned: 404 - cannot POST http://mydomain:9000/api/system/sessions (404)
Below are the lines that I changed in Graylog's server.conf (I have replaced the real IP address here):
rest_listen_url = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000
web_listen_uri = http://4.8.15.16:9000/
I have searched for references to this failure and created a graylog-settings.json file based on a suggestion from the Graylog GitHub issues, with this content:
"custom_attributes": {
"graylog-server": {
"rest_transport_url": false
}
}
But even after restarting the server, the problem continues. The Graylog log only shows INFO records, so it seems to me that the requests are not reaching the server. I would like to know whether this is due to network configuration or can be solved by an adjustment in Graylog.
Your rest_transport_uri looks odd in comparison with rest_listen_uri. Make sure that you actually need to set rest_transport_uri at all and that it is the correct setting.
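For comparison, a consistent pair would point both settings at the same API base path, something like this sketch (using the placeholder IP from the question; verify the setting names against your Graylog 2.1 server.conf):

rest_listen_uri = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000/api/
web_listen_uri = http://4.8.15.16:9000/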
I don't know where you found information about graylog-settings.json, but that file is only used in the official Omnibus package (i.e. the OVA and AMIs).
I'm trying to follow the steps in the RabbitMQ docs here to get clustering with SSL working on Windows. I'm noticing though that the "rabbitmqctl status" command starts failing after the environment variables defined in those steps are set. I'm getting the following error when executing "rabbitmqctl status":
Error: unable to connect to node 'rabbit@server1': nodedown
I've already configured RabbitMQ to use TLS 1.2 and have verified that it's working. I've ensured that my Erlang 18 cookie is the same in the user directory C:\users\me and C:\Windows on the machine, but the error persists, and is stopping other servers from clustering with it. The docs say that the Windows SSL Cluster setup is "Coming soon"... Here are the steps I've taken so far on server1. I think that Erlang wants forward slashes in the paths - this matches the rabbit.config SSL settings.
Combined the contents of my server\cert.pem and server\key.pem into rabbit.pem via the command "type server\cert.pem server\key.pem > server\rabbit.pem"
Created environment variable ERL_SSL_PATH and set it to: "C:/Program Files/erl7.0/lib/ssl-7.0/ebin"
Created environment variable RABBITMQ_CTL_ERL_ARGS and set to: -pa "%ERL_SSL_PATH%" -proto_dist inet_tls -ssl_dist_opt server_certfile C:/OpenSSL-Win64/server/rabbit.pem -ssl_dist_opt server_secure_renegotiate true client_secure_renegotiate true
Created environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS and set to same value as RABBITMQ_CTL_ERL_ARGS
Copied the Erlang cookie at C:\Windows\.erlang.cookie to my local user profile directory.
Restarted rabbit using rabbitmq-service start
At this point, on server1, "rabbitmqctl status" no longer works. Attempts to try to join server2 to server1 result in a "node down" error.
Edit 1: I can't get the initial step in the docs working, where Erlang is asked to report its SSL directory on Windows so that ERL_SSL_PATH can be set correctly. Erlang is installed at C:\Program Files\erl7.0 on my server.
Edit 2: Using werl.exe (at C:\Program Files\erl7.0\bin\werl.exe), I was able to issue the command "Foo=io:format(code:lib_dir(ssl, ebin))." and it reported the path as c:/Program Files/erl7.0/lib/ssl-7.0/ebin. However, this doesn't seem to be the cause of this issue, since that's already the path I was using.
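For what it's worth, the same lookup can also be done non-interactively from a command prompt, assuming erl is on the PATH (treat the quoting as a sketch on Windows):

erl -noshell -eval "io:format(code:lib_dir(ssl, ebin)), halt()."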
Thanks,
Andy
For environment changes to take effect on Windows, the service must be re-installed. It is not sufficient to restart the service. This can be done using the installer or on the command line with administrator permissions
(source)
This will do it:
rabbitmq-service.bat stop
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
Also, if the other cluster nodes were running while the node you're working on was down, that node's state might be assumed to have gone out of sync. In that case, the node might fail to start up and you might need to run:
rabbitmqctl force_boot
Check the logs to confirm (at %RABBITMQ_BASE%\log\rabbit@server.log).
Late answer, but hopefully this could help a searcher...
I am using Apache Archiva v2.2.0 under Windows Server 2012 R2, with Java version 1.8.0_60, inside VirtualBox. It worked for quite a long time, until a Windows auto-update.
After the Windows auto-update I get an error message when going to the Archiva URL: HTTP ERROR: 503. Problem accessing /. Reason: Service Unavailable, Powered by Jetty://.
The Apache Archiva service is running. No error logs are generated. Restarting or even reinstalling the service has no impact.
After rolling back the Windows update I can restore normal operation of Archiva, but mysteriously only once; that is, stopping and restarting Archiva causes the same HTTP ERROR 503 again.
The log file does not indicate any problem or error cause.
Thank you for any tips.
I faced a similar issue.
I restarted Archiva using ./path/to/archiva/apache-archiva-2.2.0/bin/archiva console
For you, since you are using Windows: .\bin\archiva.bat console
In my case I found out that the Jetty configuration file jetty.xml in ARCHIVA_BASE\conf had become corrupted.
Solution:
Stop the Archiva service
Replace jetty.xml with either a fresh copy or one from the last known working backup. A fresh copy of jetty.xml can be downloaded from the Archiva web site as apache-archiva-2.2.0-bin.zip; its location within the zip file is apache-archiva-2.2.0\conf\jetty.xml
Start the Archiva service
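From an elevated prompt, that sequence might look roughly like this; the service name and the ARCHIVA_BASE path are assumptions for illustration:

net stop Archiva
copy /Y C:\backup\jetty.xml %ARCHIVA_BASE%\conf\jetty.xml
net start Archiva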
For me it was complaining about NoClassDefFoundError; this was because I didn't set my JAVA_HOME properly (on macOS). After fixing this, the program worked. Maybe that was your issue.
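If JAVA_HOME is the culprit on macOS, the usual fix is something like this in your shell profile (the 1.8 version selector is an assumption; match it to the JDK you actually installed):

# Let macOS resolve the JDK 1.8 home directory
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
export PATH="$JAVA_HOME/bin:$PATH"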
We are continuously getting this error:
2014-11-06 07:05:34,460 [main ] INFO SharedFileLocker - Database activemq-data/localhost/KahaDB/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
We have verified that ActiveMQ is running as activemq, and we have verified that the owner of the directories is activemq. It will not create the directories automatically, and if we create them ourselves, it still gives the same error. The service starts fine, but it just continuously spits out the same error. There is no lock file, as it will not generate any files or directories.
Another way to fix this problem, in one step, is to create the missing symbolic link in /usr/share/activemq/. The permissions are already set properly on /var/cache/activemq/data/, but it seems the activemq RPM is not creating the symbolic link to that location as it should. The symbolic link should be as follows: /usr/share/activemq/activemq-data -> /var/cache/activemq/data/. After creating the symbolic link, restart the activemq service and the issue will be resolved.
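Assuming the stock RPM layout described above, creating the link and restarting looks like this:

sudo ln -s /var/cache/activemq/data /usr/share/activemq/activemq-data
sudo service activemq restart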
I was able to resolve this with the following:
Ensure activemq is the owner of, and has access to, /var/log/activemq and all subdirectories.
Ensure /etc/init.d/activemq has: ACTIVEMQ_CONFIGS="/etc/sysconfig/activemq"
Create the file activemq in /etc/sysconfig if it doesn't exist.
Add this line: ACTIVEMQ_DATA="/var/log/activemq/activemq-data/localhost/KahaDB"
The problem was that ActiveMQ 5.9.x was using /usr/share/activemq as its KahaDB location.
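Putting those steps together, the override file is a one-liner (the data path is the one from the steps above; adjust if your data should live elsewhere):

# /etc/sysconfig/activemq
ACTIVEMQ_DATA="/var/log/activemq/activemq-data/localhost/KahaDB"

Follow it with chown -R activemq:activemq /var/log/activemq and a service restart so the broker can create the KahaDB directory itself.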