I know this isn't the right place to ask this, but this site has the most users on it. I recently bought a Kemp LoadMaster LM-2600 load balancer for my web servers. However, the unit didn't come with an SSD, because the previous owner decided to erase it. So I downloaded the VirtualBox version of the free VLM from Kemp's website, ran VBoxManage clonehd LMOS.vmdk LMOS.img --format RAW to convert the virtual disk into a raw .img file, used dd if=LMOS.img of=/dev/sdb to write the OS to a USB stick, and then booted my LoadMaster from that USB.
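In full, the conversion and flash steps were roughly the following (the target device /dev/sdb is specific to my machine, and the extra dd flags are just ones I habitually add, so double-check the device name before running this):

VBoxManage clonehd LMOS.vmdk LMOS.img --format RAW    # clonemedium is the newer name for the same command
sudo dd if=LMOS.img of=/dev/sdb bs=4M status=progress conv=fsync
sync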
The boot process looked normal, but as soon as it finished booting, the machine switched to runlevel 0 (shutdown).
These are the logs I got when I plugged the USB back into my computer (the log file was so big that Stack Overflow won't let me paste it here):
https://pastebin.com/5PbKzRi6
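In case it matters, I pulled that log by mounting the stick's data partition on my desktop; the partition number and log path here are from memory, so treat them as approximate:

sudo mount /dev/sdb2 /mnt
sudo less /mnt/var/log/messages
sudo umount /mnt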
I noticed it said something about eth0 being down, so I plugged in an Ethernet cable and booted it again. The same thing happened, but I got a different error (this log was shorter, so I've labeled it):
-- BOOT --
2022-08-07T19:50:06+00:00 lb100 syslog-ng: syslog-ng starting up; version='3.25.1'
-- ERROR --
2022-08-07T19:50:07+00:00 lb100 raid_events_handler: RAID controller not detected yet (check # 0)
-- LOGIN --
2022-08-07T19:50:11+00:00 lb100 login: pam_unix(login:session): session opened for user bal by LOGIN(uid=0)
-- ERROR --
2022-08-07T19:50:14+00:00 lb100 raid_events_handler: RAID controller not detected yet (check # 1)
-- SHUTDOWN --
2022-08-07T19:50:15+00:00 lb100 init: Switching to runlevel: 0
2022-08-07T19:50:15+00:00 lb100 kernel: S99final (938): drop_caches: 1
2022-08-07T19:50:17+00:00 lb100 syslog-ng: syslog-ng shutting down; version='3.25.1'
2022-08-07T19:50:17+00:00 lb100 kernel: Kernel logging (proc) stopped.
2022-08-07T19:50:17+00:00 lb100 kernel: Kernel log daemon terminating.
2022-08-07T19:50:17+00:00 lb100 sslproxy: (815) caught signal 15
2022-08-07T19:50:17+00:00 lb100 raid_events_handler: stop
I have no idea what to do right now. I already tried everything I knew. What should I do?
Any help would be great,
Thanks!
Trying to sort out why my local RabbitMQ is not starting.
A previous version of RabbitMQ on this system had stopped starting (the service wouldn't come up after quite a few messages had accumulated in the queue and the machine had gone to sleep and restarted multiple times), so I decided to uninstall it and reinstall with Chocolatey. The uninstall did remove all the files from the AppData\Roaming\RabbitMQ directory, the service was not running, and the system was rebooted before the reinstall.
I currently have RabbitMQ 3.8.2, which was installed along with Erlang 20.0.
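For completeness, the uninstall/reinstall was done roughly like this (the Chocolatey package id rabbitmq is an assumption on my part; Erlang comes in as a dependency):

choco uninstall rabbitmq -y
choco install rabbitmq -y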
Here's the snippet from the rabbit log file:
=INFO REPORT==== 22-Jan-2020::19:39:24 ===
Starting RabbitMQ 3.6.11 on Erlang 20.0
Copyright (C) 2007-2017 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/
=INFO REPORT==== 22-Jan-2020::19:39:24 ===
node : rabbit#myhostname
home dir : C:\WINDOWS
config file(s) : c:/Users/username/AppData/Roaming/RabbitMQ/rabbitmq.config
cookie hash : a hash goes here
log : C:/Users/username/AppData/Roaming/RabbitMQ/log/RABBIT~1.LOG
sasl log : C:/Users/username/AppData/Roaming/RabbitMQ/log/RABBIT~2.LOG
database dir : c:/Users/username/AppData/Roaming/RabbitMQ/db/RABBIT~1
=INFO REPORT==== 22-Jan-2020::19:39:25 ===
RabbitMQ hasn't finished starting yet. Waiting for startup to finish before stopping...
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Memory high watermark set to 6505 MiB (6821275238 bytes) of 16263 MiB (17053188096 bytes) total
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Enabling free disk space monitoring
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Disk free limit set to 50MB
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Limiting to approx 8092 file handles (7280 sockets)
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
FHC read buffering: OFF
FHC write buffering: ON
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Waiting for Mnesia tables for 30000 ms, 9 retries left
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Waiting for Mnesia tables for 30000 ms, 9 retries left
=INFO REPORT==== 22-Jan-2020::19:39:31 ===
Priority queues enabled, real BQ is rabbit_variable_queue
=INFO REPORT==== 22-Jan-2020::19:39:52 ===
Error description:
{could_not_start,rabbit,
{error,
{{shutdown,
{failed_to_start_child,rabbit_epmd_monitor,
{{badmatch,noport},
[{rabbit_epmd_monitor,init,1,
[{file,"src/rabbit_epmd_monitor.erl"},{line,56}]},
{gen_server,init_it,2,
[{file,"gen_server.erl"},{line,365}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,333}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}}},
{child,undefined,rabbit_epmd_monitor_sup,
{rabbit_restartable_sup,start_link,
[rabbit_epmd_monitor_sup,
{rabbit_epmd_monitor,start_link,[]},
false]},
transient,infinity,supervisor,
[rabbit_restartable_sup]}}}}
Log files (may contain more information):
C:/Users/username/AppData/Roaming/RabbitMQ/log/RABBIT~1.LOG
C:/Users/username/AppData/Roaming/RabbitMQ/log/RABBIT~2.LOG
=ERROR REPORT==== 22-Jan-2020::19:39:53 ===
Error trying to stop RabbitMQ: error:{badmatch,false}
=INFO REPORT==== 22-Jan-2020::19:39:53 ===
Halting Erlang VM with the following applications:
sasl
stdlib
kernel
Not a lot of help to a new RabbitMQ user trying to get an install working.
These are the first few lines from the erl_crash.dump file in the same directory as the logs:
=erl_crash_dump:0.3
Wed Jan 22 20:38:13 2020
Slogan: init terminating in do_boot ({undef,[{rabbit_nodes_common,make,rabbit#myhostname,[]},{rabbit_prelaunch,start,0,[{_},{_}]},{init,start_em,1,[{_},{_}]},{init,do_boot,3,[{_},{_}]}]})
System version: Erlang/OTP 20 [erts-9.0] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:10]
Compiled: Tue Jun 20 19:49:32 2017
I've been going through the docs, but haven't found much of a solution to this.
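For anyone who wants to suggest something concrete, these are the standard checks and service re-registration steps I know of (the install path is the default one and will differ on some machines; epmd -names just confirms the Erlang port mapper is reachable):

epmd -names
cd "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.2\sbin"
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
rabbitmqctl.bat status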
The DataNode and NodeManager are not starting in pseudo-distributed mode (Apache Hadoop).
Seeing this error in the log file:
***2017-08-22 17:15:08,403 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager from archit doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.***
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:278)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
2017-08-22 17:15:08,404 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
The "ResourceManager: NodeManager from archit doesn't satisfy minimum allocations" error is seen when node on which node manager is being started does not have enough resources w.r.t yarn.scheduler.minimum-allocation-vcores and yarn.scheduler.minimum-allocation-mb configurations.
Reduce values of yarn.scheduler.minimum-allocation-vcores and yarn.scheduler.minimum-allocation-mb then restart yarn.
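For example, in a pseudo-distributed setup the edit and restart would look roughly like this (the values 256 MB / 1 vcore and the $HADOOP_HOME layout are only illustrative):

# $HADOOP_HOME/etc/hadoop/yarn-site.xml:
#   <property><name>yarn.scheduler.minimum-allocation-mb</name><value>256</value></property>
#   <property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value></property>
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh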
I have a working client application under Spring Boot 1.0.1, but when I update the Spring Boot version to 1.1.3.RELEASE, I get a periodic "connection reset" stack trace on the client, and I can see the following log on the server:
=INFO REPORT==== 3-Jul-2014::10:57:55 ===
accepting AMQP connection <0.3945.0> (192.168.100.14:64049 -> 192.168.100.116:5672)
=ERROR REPORT==== 3-Jul-2014::10:57:58 ===
closing AMQP connection <0.3945.0> (192.168.100.14:64049 -> 192.168.100.116:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'dev-lmu' refused for user 'hermes'",
'connection.open'}}
I think it's fair to rule permission issues out, because the app works fine under Boot 1.0.1.
I use RabbitMQ 3.3.4
Has anyone else run into this issue?
Looks like this was a bug in Boot, but it has since been fixed (upgrade to 1.1.4):
https://github.com/spring-projects/spring-boot/commit/ad1636fd349b2e6636837d98af1ba1d07500ec9f#diff-19dc1e9553b1605c75168e38dcbc9477
The fix removed the leading '/' from the virtual host.
The relevant boot issue is: https://github.com/spring-projects/spring-boot/issues/1206
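For reference, the virtual host is configured through Spring Boot's standard AMQP properties, e.g. in application.properties (the host and user values below are only placeholders taken from the log above):

spring.rabbitmq.host=192.168.100.116
spring.rabbitmq.username=hermes
spring.rabbitmq.virtual-host=dev-lmu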
I'm having some trouble with keeping RabbitMQ up.
I start it via the provided /etc/init.d/rabbitmq-server start, and it comes up fine; status shows that it's running.
But after a while, the server dies. status prints
Error: unable to connect to node 'rabbit#myserver': nodedown
Checking the log file, it seems I've reached the memory threshold. Here are the logs:
# start
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
Limiting to approx 924 file handles (829 sockets)
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
Memory limit set to 723MB of 1807MB total.
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
Disk free limit set to 953MB
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
Management plugin upgraded statistics to fine.
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
msg_store_transient: using rabbit_msg_store_ets_index to provide index
=INFO REPORT==== 26-Mar-2014::03:24:13 ===
msg_store_persistent: using rabbit_msg_store_ets_index to provide index
=WARNING REPORT==== 26-Mar-2014::03:24:13 ===
msg_store_persistent: rebuilding indices from scratch
=INFO REPORT==== 26-Mar-2014::03:24:27 ===
started TCP Listener on [::]:5672
=INFO REPORT==== 26-Mar-2014::03:24:27 ===
Management agent started.
=INFO REPORT==== 26-Mar-2014::03:24:27 ===
Management plugin started. Port: 55672, path: /
=INFO REPORT==== 26-Mar-2014::03:24:39 ===
accepting AMQP connection <0.1999.0> (127.0.0.1:34788 -> 127.0.0.1:5672)
=WARNING REPORT==== 26-Mar-2014::03:24:40 ===
closing AMQP connection <0.1999.0> (127.0.0.1:34788 -> 127.0.0.1:5672):
connection_closed_abruptly
=INFO REPORT==== 26-Mar-2014::03:24:42 ===
accepting AMQP connection <0.2035.0> (127.0.0.1:34791 -> 127.0.0.1:5672)
=INFO REPORT==== 26-Mar-2014::03:24:46 ===
accepting AMQP connection <0.2072.0> (127.0.0.1:34792 -> 127.0.0.1:5672)
=INFO REPORT==== 26-Mar-2014::03:25:19 ===
vm_memory_high_watermark set. Memory used:768651448 allowed:758279372
=INFO REPORT==== 26-Mar-2014::03:25:19 ===
alarm_handler: {set,{{resource_limit,memory,'rabbit#myserver'},
[]}}
=INFO REPORT==== 26-Mar-2014::03:25:48 ===
Statistics database started.
# server dies here
I seem to have been reaching the memory threshold, but from reading the docs, that shouldn't shut down the server, should it? It should just block publishing until some memory is freed up?
And yes, I am aware that my Celery workers are the cause of the memory usage; I just thought RabbitMQ would handle it correctly, which the docs seem to imply. So am I doing something wrong?
EDIT: Refactored my task so its message is just a single string (max 15 chars). It doesn't seem to make any difference.
I tried starting RabbitMQ and celery worker --purge with no events coming in to trigger the tasks, but RabbitMQ's memory usage still climbs steadily to about 40% and it crashes shortly afterwards, without any of my tasks ever getting the chance to run.
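For reference, this is what I've been running to watch the memory usage (rabbitmqctl needs root on my box; the node is the default local one):

sudo rabbitmqctl status
sudo rabbitmqctl list_queues name messages memory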
Updating RabbitMQ to the official stable version fixed the issue; the RabbitMQ package in Ubuntu 12.04's repository was really old.
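Roughly (the repository setup itself follows the Debian/Ubuntu instructions on rabbitmq.com, which I won't reproduce from memory here):

dpkg -l rabbitmq-server        # shows the old distro version currently installed
# add the official RabbitMQ apt repository as described on rabbitmq.com, then:
sudo apt-get update
sudo apt-get install rabbitmq-server

If memory pressure is still a concern after that, the watermark itself can also be raised in /etc/rabbitmq/rabbitmq.config, e.g. [{rabbit, [{vm_memory_high_watermark, 0.6}]}].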