I installed DevStack on Ubuntu 16.04 running in VirtualBox. The first time, everything came up and I was able to access all services. I shut the VM down, started it again, and now the Keystone service is not starting.
I have been reading a lot of forums which say the DevStack installation is corrupted and I have to run stack.sh again. But isn't there any way to bring up the existing Keystone service? All the other services are running.
I have tried "sudo systemctl start devstack@keystone.service" but it doesn't work.
Please provide a solution for this. Thanks!
Do you see the file below after a reboot?
ls -l /var/run/uwsgi/keystone-wsgi-public.socket
srw-rw-rw- 1 stack stack 0 May 9 09:19 keystone-wsgi-public.socket
This is a socket file and it should get created during startup.
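If the socket is missing, one thing to try is restarting the Keystone unit and watching its logs. A minimal sketch, assuming DevStack created the usual devstack@keystone systemd unit:
sudo systemctl restart devstack@keystone.service   # restart the Keystone uWSGI app
sudo systemctl status devstack@keystone.service    # confirm it is active
sudo journalctl -u devstack@keystone.service -f    # follow startup errors, if any
ls -l /var/run/uwsgi/keystone-wsgi-public.socket   # socket should reappear on success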
I have installed Hue on a Linux instance on Azure. I have made all the required changes in Ambari and in the hue.ini conf file. When I run the supervisor job, it runs fine:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hue 83933 sshuser 3u IPv4 15707246 0t0 TCP *:8000 (LISTEN)
But when I try to access the Hue web UI, no page loads; the browser says the connection was refused.
I tried clearing caches and setting it up again.
I am using Hue 4.7 and I don't find any issues in the error.log file, yet I don't see any data in the access.log file. Could you please help me?
Do you have
http_host=0.0.0.0
in the hue.ini?
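For reference, this is roughly what the relevant block in hue.ini looks like (port 8000 matches your lsof output; adjust if yours differs):
[desktop]
  # listen on all interfaces so the UI is reachable from outside the VM
  http_host=0.0.0.0
  http_port=8000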
@Ruthikajawar here is a working hue.ini for Ambari:
https://github.com/steven-dfheinz/HDP3-Hue-Service/blob/Hue.4.6.0/configuration/live.hue.ini
I have noticed that sometimes, after the initial install, it takes one or two restarts to get the web UI to work. I have also noticed that sometimes, after a restart, it takes quite a few moments before the web UI starts to respond.
Give it some time after the restart and check the web UI. If it still isn't answering, check /var/log/hue/error.log, as it should be very specific about the errors causing the web UI to fail on startup.
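A couple of quick checks while you wait (the port and log path are the defaults mentioned above and may differ on your install):
sudo lsof -i :8000 -P -n          # confirm Hue is actually listening
tail -f /var/log/hue/error.log    # watch for errors while you reload the page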
I have a node express app running on Ubuntu with the pm2 service and nginx. All is fine starting up the pm2 service and my app is accessible. As soon as I navigate to another page on my app, pm2 list shows my app as status: errored. When I look at the error logs, I'm seeing a loop of:
Error: listen EADDRINUSE :::4300
I would think that I simply need to find the process using port 4300 and kill it, but that's becoming a problem. When I run lsof -w -n -i tcp:4300 I do see a process running with a PID of 23350. When I then run kill -15 23350, that obviously kills the process, but a new one is started immediately. I have tried stopping, starting and restarting pm2 with no luck. When I stop pm2, I can't even run npm run start as I get Error: listen EADDRINUSE :::4300.
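For reference, the exact commands I'm running (4300 is the port my app listens on, 23350 was the PID reported at the time):
lsof -w -n -i tcp:4300    # the USER column shows who owns the listener
kill -15 23350            # the port gets rebound immediately afterwards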
Why is pm2 crashing upon navigating to another page on my app?
Why, when pm2 does crash, is it complaining about port 4300 already being in use?
I should note that when pm2 list reports status: errored, my app is still accessible except when the form on the /contact page is submitted. It submits to itself with app.post('/contact'...), but in the browser I see a blank white screen with 'Cannot POST /contact'.
Thanks for your help in advance!
I had this same issue (killing the node process but another one spawns) and finally resolved it. I may use some wrong UNIX terminology; please correct me if I do.
I was running pm2 on an AWS EC2 instance, and the OS was Ubuntu 20.04.
My problem was that it appears that the root user's pm2 processes are separate from the ubuntu (default) user's. When I was the default user and ran pm2 status, I saw my one stopped node process. When I switched to the root user using sudo su and ran pm2 status, I found another pm2 process running that wasn't listed when I was the default user! So I ran pm2 stop all as the root user and that solved my issue.
Steps:
sudo su # switch to root user
pm2 status # list all processes. For me this listed an extra running process that wasn't listed when I was the default user.
I just ran pm2 stop all to stop all the root user processes. You can also just stop the specific running process if you want with pm2 stop <name of pm2 process>.
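Putting it together, a minimal sketch (port 4300 is taken from the question; substitute your app's port):
sudo lsof -i :4300 -P -n    # the USER column shows which account owns the listener
sudo su                     # switch to root
pm2 status                  # root has its own pm2 daemon and process list
pm2 stop all                # or: pm2 delete all
exit                        # back to the default user
pm2 status                  # confirm only your own processes remain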
Hopefully that helps someone!
Run environment: Linux (CentOS 7), JDK 1.8, and ActiveMQ 5.15.
I started ActiveMQ, then visited the management page with Chrome. When I try to log in with the default username and password, I get the following error:
HTTP ERROR: 503
Problem accessing /admin/. Reason:
Service Unavailable Powered by Jetty://
How can I resolve this problem?
I was getting this same error. It turns out that I had run it as the root user originally, then later stopped it and ran it as a non-root user. Certain data files that had been created and owned by the original root instance were not accessible to the non-root user.
Check the ownership of the files, and change them if necessary to match the user that the broker is running as.
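A hedged sketch of what that check might look like (the install path /opt/apache-activemq-5.15.0 and the user name are just examples; substitute your own):
ps -ef | grep activemq                                    # which user is the broker running as?
ls -l /opt/apache-activemq-5.15.0/data                    # look for files still owned by root
sudo chown -R myuser:myuser /opt/apache-activemq-5.15.0   # hand ownership to the broker's user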
Had the same issue.
Maybe something went wrong during the extraction of the package.
I downloaded this:
wget https://archive.apache.org/dist/activemq/5.15.0/apache-activemq-5.15.0-bin.tar.gz
and extracted it with:
sudo tar -zxvf apache-activemq-5.15.0-bin.tar.gz -C /opt
then it worked for me.
My two cents:
I started with the activemq package from the Ubuntu repo, but later changed to the binary package from the official website.
In my case, the repo version left an /etc/default/activemq config file, which runs activemq as the user "activemq". It turned out that in previous experiments I had not killed the old processes running under "activemq" when I started activemq under my own user name. There were two activemq processes running under different user names, and when connecting to the admin console I got a 503.
I deleted the /etc/default/activemq file, killed all activemq processes running under "activemq", and restarted activemq under my own user name; the 503 was gone.
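Roughly, the cleanup looked like this (the install path assumes the binary package was extracted under /opt as in the answer above; adjust to your layout):
ps -ef | grep activemq                             # find brokers still running under "activemq"
sudo pkill -u activemq -f activemq                 # stop them
sudo rm /etc/default/activemq                      # remove the leftover repo config
/opt/apache-activemq-5.15.0/bin/activemq start     # restart under your own user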
I'm trying to follow the steps in the RabbitMQ docs here to get clustering with SSL working on Windows. I'm noticing though that the "rabbitmqctl status" command starts failing after the environment variables defined in those steps are set. I'm getting the following error when executing "rabbitmqctl status":
Error: unable to connect to node 'rabbit@server1': nodedown
I've already configured RabbitMQ to use TLS 1.2 and have verified that it's working. I've ensured that my Erlang 18 cookie is the same in the user directory C:\users\me and C:\Windows on the machine, but the error persists, and is stopping other servers from clustering with it. The docs say that the Windows SSL Cluster setup is "Coming soon"... Here are the steps I've taken so far on server1. I think that Erlang wants forward slashes in the paths - this matches the rabbit.config SSL settings.
Combined the contents of my server\cert.pem and server\key.pem into rabbit.pem via the command "type server\cert.pem server\key.pem > server\rabbit.pem"
Created environment variable ERL_SSL_PATH and set to: "C:/Program Files/erl7.0/lib/ssl-7.0/ebin"
Created environment variable RABBITMQ_CTL_ERL_ARGS and set to: -pa "%ERL_SSL_PATH%" -proto_dist inet_tls -ssl_dist_opt server_certfile C:/OpenSSL-Win64/server/rabbit.pem -ssl_dist_opt server_secure_renegotiate true client_secure_renegotiate true
Created environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS and set to same value as RABBITMQ_CTL_ERL_ARGS
Copied the Erlang cookie at C:\Windows\.erlang.cookie to my local user profile directory.
Restarted rabbit using rabbitmq-service start
At this point, on server1, "rabbitmqctl status" no longer works. Attempts to try to join server2 to server1 result in a "node down" error.
Edit 1: I can't get the initial step in the docs working to ask Erlang to report its SSL directory on Windows in order to set ERL_SSL_PATH correctly. Erlang is installed at C:\Program Files\erl7.0 on my server.
Edit 2: Using werl.exe (at C:\Program Files\erl7.0\bin\werl.exe), I was able to issue the command "Foo=io:format(code:lib_dir(ssl, ebin))." and it reported the path as c:/Program Files/erl7.0/lib/ssl-7.0/ebin. However, this doesn't seem to be the cause of this issue since that's already what I was using.
Thanks,
Andy
For environment changes to take effect on Windows, the service must be re-installed. It is not sufficient to restart the service. This can be done using the installer or on the command line with administrator permissions. (source)
This will do:
rabbitmq-service.bat stop
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
Also, if the other cluster nodes were running while the node you're working on was down, its state might be assumed to have gone out of sync. In that case, the node might fail to start up and you might need to:
rabbitmqctl force_boot
Check the logs to confirm (at %RABBITMQ_BASE%\log\rabbit@server.log).
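After the re-install, one way to confirm the node picked up the new arguments (rabbit@server1 is the node name from the question; the eval call just inspects the running VM's boot arguments):
rabbitmqctl status
rabbitmqctl -n rabbit@server1 eval "init:get_argument(proto_dist)."
If the TLS distribution is active, the eval call should include inet_tls in its output.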
Late answer, but hopefully this could help a searcher...
I used this build to install Redis on my Windows 7 machine:
https://github.com/rgl/redis (git)
http://ruilopes.com/redis-setup/binaries/redis-2.4.6-setup-64-bit.exe (binary)
The service was installed successfully, but it doesn't start.
The message says: 'Redis Server service on local computer was started and then stopped'. The logs folder is empty. redis-server.exe starts properly when not run as a service. How can I fix this? Please suggest any other working distribution if you know one.
Ran into a similar issue on Windows 10 when trying to start Redis v3.0.503 as a service.
I had to install the service with a service-name param and it magically started working.
C:\redis>redis-server --service-install redis.windows.conf --loglevel verbose --service-name redisService
[7484] 04 Feb 00:03:53.610 # Granting read/write access to 'NT AUTHORITY\Network Service' on: "C:\redis" "C:\redis"
[7484] 04 Feb 00:03:53.612 # Redis successfully installed as a service.
Found the solution here:
Redis-windows GitHub Wiki - Issues might happen
Commonly, the Redis server on Windows fails to start if you don't specify a maxheap parameter. Before installing the service, try editing the redis.windows.conf file and uncommenting the maxheap parameter, setting it to something suitable.
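For example, the line in redis.windows.conf might end up looking like this (1gb is only an illustrative value; pick what fits your machine):
maxheap 1gb
Then (re)install and start the service, e.g. with redis-server --service-install redis.windows.conf --service-name redisService as shown in the earlier answer.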