RabbitMQ with erlang-base-hipe installed on Ubuntu 14.04 starts fine, but it logs several of these messages in /var/log/rabbitmq/startup_log:
Error: function_clause
Stack trace: [{error_logger_file_h,write_event,
[{<0.4819.0>,"/var/log/rabbitmq/rabbit@rab04.log",[]},
{info_msg,<0.31.0>,
{<0.2.0>,"HiPE in use: compiled ~B modules in ~Bs.~n",
"9\e"}}],
[{file,"error_logger_file_h.erl"},{line,114}]},
{error_logger_file_h,handle_event,2,
[{file,"error_logger_file_h.erl"},{line,79}]},
{rabbit_error_logger_file_h,safe_handle_event,3,[]},
{gen_event,server_update,4,[{file,"gen_event.erl"},{line,538}]},
{gen_event,server_notify,4,[{file,"gen_event.erl"},{line,520}]},
{gen_event,server_notify,4,[{file,"gen_event.erl"},{line,522}]},
{gen_event,handle_msg,5,[{file,"gen_event.erl"},{line,261}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]
RabbitMQ works fine; however, we no longer get information written to the error log file.
The change we made was to install erlang-base-hipe and modify /etc/rabbitmq/rabbitmq.config with:
{hipe_compile, true}
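For reference, hipe_compile is a setting of the rabbit application, so the full config file looks roughly like this (a sketch; any other settings are omitted):

[
  {rabbit, [
    {hipe_compile, true}
  ]}
].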
The permissions on /var/log/rabbitmq/rabbit@rab04.log are 644, and the file is owned by user and group rabbitmq, which is the user the rabbitmq service runs as.
Suggestions on how to debug and/or repair this problem are appreciated.
(Windows, JDK 8, and ARTEMIS_HOME set.) I downloaded v2.5.0, created a broker, and ran it:
artemis.cmd create broker1 (specifying login info), then cd broker1 and bin\artemis.cmd run
(I understand the instance directory is not supposed to live under the ARTEMIS_HOME dir.) The web console renders and I can access it at localhost:8161/console. But when I try to log in, I get a Server Error on the web page, and the CLI shows:
[org.eclipse.jetty.server.HttpChannel] /console/auth/login/:java.lang.SecurityException: java.io.IOException: \login.config (No such file or directory)
The file broker1/etc/login.config does exist. I have tried running from various directories and explicitly specifying the configuration:
cd broker1/bin, artemis.cmd run -- xml:artemis-service.xml
But same issue. Why can't this login.config be recognized?
I believe there's a bug in artemis.profile.cmd. It's using this:
-Djava.security.auth.login.config=%ARTEMIS_ETC_INSTANCE%\login.config
But the %ARTEMIS_ETC_INSTANCE% variable is not defined. I believe it should be using %ARTEMIS_INSTANCE_ETC_URI% instead. Can you try this? If that fixes the issue, then I'll open a JIRA and send a PR to get it fixed permanently.
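For anyone wanting to test this, the change in etc\artemis.profile.cmd would look roughly like this (a sketch; the exact separator after the variable depends on how the URI variable is defined):

rem before (ARTEMIS_ETC_INSTANCE is never defined):
-Djava.security.auth.login.config=%ARTEMIS_ETC_INSTANCE%\login.config
rem after (suggested fix from above):
-Djava.security.auth.login.config=%ARTEMIS_INSTANCE_ETC_URI%login.config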
This is driving me nuts.
I'm setting up Airflow in a cloud environment. I have one server running the scheduler and the webserver and one server as a Celery worker, and I'm using Airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the Airflow UI, with the access key and the secret key, as described here.
I checked the connection using:
import airflow.hooks  # assuming Airflow 1.8's hook layout
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test', 'test', bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log file following the expected naming conventions, and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
The webserver won't even start now; it fails with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing it and just loading the S3 handler without checking first, and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated. Cheers!
Solved:
upgraded to 1.9
ran the steps described in this comment
added the following to airflow.cfg (full snippet below):
[core]
remote_logging = True
ran
pip install --upgrade airflow[log]
Everything's working fine now.
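For reference, the relevant parts of airflow.cfg ended up looking roughly like this (a sketch combining the keys mentioned above; the bucket name and connection id are the placeholders used earlier):

[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn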
I am trying to install RabbitMQ. The installation of Erlang (OTP 18.1) completed successfully, and the RabbitMQ installation also completed successfully. But when I try to connect to RabbitMQ, I get the following error:
C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.5.6\sbin>rabbitmq-plugins.bat enable rabbitmq_management
Plugin configuration unchanged.
Applying plugin configuration to rabbit@INLN50899724A... failed.
* Could not contact node rabbit@INLN50899724A.
Changes will take effect at broker restart.
* Options: --online - fail if broker cannot be contacted.
--offline - do not try to contact broker.
C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.5.6\sbin>rabbitmq-server restart
ERROR: epmd error for host INLN50899724A: address (cannot connect to host/port)
I may be replying really late, but I'm still facing this issue, so it may help somebody, even when installing RabbitMQ version 3.6.5. To change the node name, open "rabbitmq-env.bat" under "<installation dir>\sbin" and change RABBITMQ_NODENAME to "rabbit@localhost" (line 90 in RabbitMQ 3.6.5). But make sure you remove the Windows service first, then change the node name, reinstall the service, and start it. This worked for me.
No other options marked as the right answer on Stack Overflow worked for me!
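For anyone following along, the remove/edit/reinstall dance looks roughly like this from an elevated command prompt in the sbin directory (a sketch based on the standard RabbitMQ Windows scripts):

rabbitmq-service.bat remove
rem edit RABBITMQ_NODENAME in rabbitmq-env.bat at this point
rabbitmq-service.bat install
rabbitmq-service.bat start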
Remove the RabbitMQ service. Uninstall RabbitMQ. Kill the epmd.exe process. Delete your C:\Users\<YourUserName>\AppData\Roaming\RabbitMQ directory.
Go to Control Panel -> System -> Advanced -> Environment Variables.
Add a variable named RABBITMQ_NODENAME and set it to rabbit@localhost.
Reinstall RabbitMQ.
Navigate to the RabbitMQ sbin directory (or run the command from the start menu) and run rabbitmqctl status.
You should no longer see the (cannot connect to host/port) error.
And yes, this will fix your Cisco AnyConnect VPN related installation issues.
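If you prefer the command line over the Control Panel dialog, the same variable can be set from an elevated prompt (a sketch; open a new shell afterwards for it to take effect):

setx RABBITMQ_NODENAME "rabbit@localhost"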
Open C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.15\sbin\rabbitmq-server.bat and add the below command as the first line:
set RABBITMQ_NODENAME=rabbit@localhost
For a Windows machine:
Go to C:\Users\<YourUserName>\AppData\Roaming\RabbitMQ
Create a file named rabbitmq-env.conf
Add the following:
CONFIG_FILE=C:\Users\<YourUserName>\AppData\Roaming\RabbitMQ\rabbitmq
NODE_IP_ADDRESS=127.0.0.1
NODENAME=rabbit@localhost
The above is my env config; for this particular issue, setting the node name alone should be sufficient.
Turn off your firewall and start RabbitMQ; it will work. After running it once, it will keep working even if you turn the firewall back on.
This worked for me on a Windows 10 machine.
In your shell:
$ export RABBITMQ_NODENAME=rabbit@localhost
$ /sbin/rabbitmq-server -detached
Change rabbit@INLN50899724A to rabbit@localhost and try again.
Or edit your hosts file so that INLN50899724A points to 127.0.0.1.
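For example, the hosts entry would look like this (using the hostname from the error above):

127.0.0.1 INLN50899724A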
For a similar error using RabbitMQ on Windows 10, I did the following:
set RABBITMQ_NODENAME=rabbit@localhost
in the directory where RabbitMQ is installed, which for me was C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.5\sbin,
and then started:
.\rabbitmq-server start
Also, I had changed the hosts file in C:\Windows\System32\Drivers\etc to point my computer name at 127.0.0.1:
127.0.0.1 yourhostnamehere
I have a Glassfish 3.1.2 cluster.
I have 2 ssh nodes that each have 1 instance.
I added my lib jars in the DAS domains/mydomain/config/mycluster-config/lib/ directory.
When I restart my instances, I see that the jars are copied to each node, into the
nodes/node1/instance1/config/mycluster-config/lib/ and
nodes/node2/instance2/config/mycluster-config/lib/ directories.
My application is a JSF 2.2 app with RichFaces 4.3.
The problem is that when I deploy my application, the application can't find any of the jars from my libs.
One question would be: How do I set the classpath for the nodes?
I have tried:
export LD_LIBRARY_PATH="/path/to/node1/instance1/config/prodc-config/lib"
and the same command on the other node.
This did not enable my app to find the libs.
If I deploy my EAR to the standalone domain, NOT the cluster, then it deploys without any errors.
When I deploy my app from the web admin console, I check Availability Enabled and make sure the target is pointing to mycluster.
These are some of the errors I am getting:
WELD-000119 Not generating any bean definitions from com.my.domain.Validate because of underlying class loading error
Application was not properly initialized at startup, could not find Factory: javax.faces.context.FacesContextFactory. Attempting to find backup.
The cluster is always able to start.
The instances start and stop just fine.
The full message while deploying my EAR is:
Warning Command succeeded with Warning
"domain/applications/application/my_EAR" created successfully.
WARNING: Command _deploy did not complete successfully on server instance instance1: remote failure:
Failed to load the application on instance instance1.
The application will not run properly. Please fix your application and redeploy.
Exception while shutting down application container : java.lang.NullPointerException. Please see server.log for more details.
WARNING: Command _deploy did not complete successfully on server instance instance2: remote failure:
Failed to load the application on instance instance2. The application will not run properly. Please fix your application and redeploy.
Exception while shutting down application container : java.lang.NullPointerException. Please see server.log for more details.
WARNING: Command _deploy did not complete successfully on server instance instance1: remote failure:
Failed to load the application on instance instance1. The .... msg.seeServerLog
Thank you for any help with my problem.
For GlassFish v3.1.2 I have been using this link: http://docs.oracle.com/cd/E18930_01/html/821-2426/gkrdd.html#gksav
When an app is deployed, I have to specify the libraries during deployment. The libraries are relative to the applibs directory, so for a cluster the path would be: ../../config/clustername-config/lib/util.jar
My problem must have been that I did not get the path correct when specifying this directory. That is what I get for not looking closely enough at the path I was using.
So, short answer: use --libraries when deploying an app to a cluster, and make sure the path is correct.
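As a concrete sketch (myapp.ear is a placeholder; the target and library path come from the discussion above):

asadmin deploy --target mycluster --libraries ../../config/mycluster-config/lib/util.jar myapp.ear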
I tried everything I could think of, but I always get this error when I try to start the Apache FtpServer service.
Windows could not start the Apache FtpServer on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 0.
Has anyone got Apache FtpServer to function as a Windows service in Windows 7 or even Windows Vista?
Note:
My JAVA_HOME environment variable is set to c:\java, and I have a symbolic link linking this directory to the longer C:\Program Files\Java\jdk1.6.0_17.
I also have another symbolic link pointing c:\ftp to the longer C:\Program Files (x86)\Apache Software Foundation\apache-ftpserver-1.0.3.
The only reason I did any of this in the first place is that I was reading that some people were having problems with spaces or long path names, but I tried physically moving the directories as well, all with the same error.
I had the same issue today using Apache FtpServer 1.0.5; the error message was "Failed installing 'ftpd' service".
The solution was to start the DOS command shell as "Administrator".