Unable to restart Rabbitmq: badarg rpc.erl line 206 - rabbitmq

My RabbitMQ server seems to be running, but all the connections from Celery are refused.
Here is my status:
(venv) root@xyz:/var/log/rabbitmq# sudo -u rabbitmq rabbitmqctl status
Status of node rabbit@xyz ...
[{pid,673},
{running_applications,[{rabbit,"RabbitMQ","3.4.3"},
{os_mon,"CPO CXC 138 46","2.2.14"},
{xmerl,"XML parser","1.3.5"},
{sasl,"SASL CXC 138 11","2.3.4"},
{stdlib,"ERTS CXC 138 10","1.19.4"},
{kernel,"ERTS CXC 138 10","2.16.4"}]},
{os,{unix,linux}},
{erlang_version,"Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:4:2] [async-threads:30] [kernel-poll:true]\n"},
{memory,[{total,40492248},
{connection_readers,0},
{connection_writers,0},
{connection_channels,0},
{connection_other,6856},
{queue_procs,2704},
{queue_slave_procs,0},
{plugins,0},
{other_proc,13684848},
{mnesia,0},
{mgmt_db,0},
{msg_index,2263632},
{other_ets,766920},
{binary,2761320},
{code,16179613},
{atom,561761},
{other_system,4264594}]},
{alarms,[]},
{listeners,[]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,858993459},
{disk_free_limit,50000000},
{disk_free,21639958528},
{file_descriptors,[{total_limit,924},
{total_used,3},
{sockets_limit,829},
{sockets_used,1}]},
{processes,[{limit,1048576},{used,106}]},
{run_queue,0},
{uptime,1619146}]
Here is what I get when I try to restart:
root@xyz:/var/log/rabbitmq# sudo -u rabbitmq rabbitmqctl stop
Stopping and halting node rabbit@xyz ...
Error: {badarg,[{erlang,group_leader,[undefined,<5172.10942.32>],[]},
{rabbit_log,with_local_io,1,[]},
{rabbit,stop_and_halt,0,[]},
{rpc,'-handle_call_call/6-fun-0-',5,
[{file,"rpc.erl"},{line,205}]}]}
Nothing shows up in the logs...
How can I restart my RabbitMQ server?
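Since rabbitmqctl stop itself crashes, a blunt fallback (my own sketch, not an answer from this thread; it assumes a standard Debian-style service layout) is to stop the Erlang VM directly using the pid reported by the status output above (673 here) and then start the service again:
sudo kill 673                         # terminate the beam process listed as pid in rabbitmqctl status
sudo service rabbitmq-server start    # or: sudo systemctl start rabbitmq-server
sudo -u rabbitmq rabbitmqctl status   # the listeners list should no longer be empty
Note that {listeners,[]} in the status above means the broker never opened its AMQP listener, which would also explain why Celery's connections are refused.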

Related

yaws built with crypto fails to start

I need to use hashed passwords for authentication in Yaws.
I've rebuilt it from source (https://github.com/klacke/yaws), with this sequence of commands:
./configure --enable-crypto --prefix=/some/local/path
make install
When I run yaws (/some/local/path/bin/yaws -i -erlarg "-boot start_sasl"), I get this error:
{"init terminating in do_boot",{{badmatch,{'EXIT',{badarg,[{erlang,list_to_existing_atom,["crypto"],[]},{yaws,'-start_app_deps/0-fun-0-',2,[{file,"yaws.erl"},{line,264}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{yaws,start_app_deps,0,[{file,"yaws.erl"},{line,263}]},{yaws,start,0,[{file,"yaws.erl"},{line,209}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}},[{yaws,start,0,[{file,"yaws.erl"},{line,209}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
The crypto library is present:
checking for Erlang/OTP 'crypto' library subdirectory... /usr/local/Cellar/erlang/19.2/lib/erlang/lib/crypto-3.7.2
checking for Erlang/OTP 'crypto' library version... 3.7.2
What is causing this problem? Do I need to pass some specific options to run the newly built Yaws server?
When I run make test, all tests pass.
EDIT
Starting yaws with bin/yaws -i -erlarg "-init_debug" yields this output:
{progress,preloaded}
{progress,kernel_load_completed}
{progress,modules_loaded}
{start,heart}
{start,error_logger}
{start,application_controller}
{progress,init_kernel_started}
{apply,{application,load,[{application,stdlib,[{description,"ERTS CXC 138 10"},{vsn,"3.2"},{id,[]},{modules,[array,base64,beam_lib,binary,c,calendar,dets,dets_server,dets_sup,dets_utils,dets_v8,dets_v9,dict,digraph,digraph_utils,edlin,edlin_expand,epp,eval_bits,erl_anno,erl_bits,erl_compile,erl_eval,erl_expand_records,erl_internal,erl_lint,erl_parse,erl_posix_msg,erl_pp,erl_scan,erl_tar,error_logger_file_h,error_logger_tty_h,escript,ets,file_sorter,filelib,filename,gb_trees,gb_sets,gen,gen_event,gen_fsm,gen_server,gen_statem,io,io_lib,io_lib_format,io_lib_fread,io_lib_pretty,lib,lists,log_mf_h,maps,math,ms_transform,orddict,ordsets,otp_internal,pool,proc_lib,proplists,qlc,qlc_pt,queue,rand,random,re,sets,shell,shell_default,slave,sofs,string,supervisor,supervisor_bridge,sys,timer,unicode,win32reg,zip]},{registered,[timer_server,rsh_starter,take_over_monitor,pool_master,dets]},{applications,[kernel]},{included_applications,[]},{env,[]},{maxT,infinity},{maxP,infinity}]}]}}
{progress,applications_loaded}
{apply,{application,start_boot,[kernel,permanent]}}
Erlang/OTP 19 [erts-8.2] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:true] [dtrace]
{apply,{application,start_boot,[stdlib,permanent]}}
{apply,{application,start_boot,[sasl,permanent]}}
=PROGRESS REPORT==== 20-Nov-2018::14:09:57 ===
supervisor: {local,sasl_safe_sup}
started: [{pid,<0.60.0>},
{id,alarm_handler},
{mfargs,{alarm_handler,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
=PROGRESS REPORT==== 20-Nov-2018::14:09:57 ===
supervisor: {local,sasl_sup}
started: [{pid,<0.59.0>},
{id,sasl_safe_sup},
{mfargs,
{supervisor,start_link,
[{local,sasl_safe_sup},sasl,safe]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
=PROGRESS REPORT==== 20-Nov-2018::14:09:57 ===
supervisor: {local,sasl_sup}
started: [{pid,<0.61.0>},
{id,release_handler},
{mfargs,{release_handler,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
{apply,{c,erlangrc,[]}}
=PROGRESS REPORT==== 20-Nov-2018::14:09:57 ===
application: sasl
started_at: nonode@nohost
{progress,started}
{"init terminating in do_boot",{{badmatch,{'EXIT',{badarg,[{erlang,list_to_existing_atom,["crypto"],[]},{yaws,'-start_app_deps/0-fun-0-',2,[{file,"yaws.erl"},{line,264}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{yaws,start_app_deps,0,[{file,"yaws.erl"},{line,263}]},{yaws,start,0,[{file,"yaws.erl"},{line,209}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}},[{yaws,start,0,[{file,"yaws.erl"},{line,209}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
init terminating in do_boot ()
It appears that the crypto service is not started.
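A quick diagnostic (my own suggestion, not part of the original post) is to start a plain Erlang shell from the same installation that Yaws runs against and check whether the crypto application can be loaded at all. The trace shows yaws:start_app_deps/0 converting the dependency name "crypto" with list_to_existing_atom/1, which throws badarg when the atom crypto does not exist yet, i.e. nothing has loaded or referenced the crypto application:
$ erl
1> application:ensure_all_started(crypto).
{ok,[crypto]}
2> code:which(crypto).
"/usr/local/Cellar/erlang/19.2/lib/erlang/lib/crypto-3.7.2/ebin/crypto.beam"
The output above is roughly what a healthy installation prints. If ensure_all_started fails, or code:which/1 points at a different crypto than the 3.7.2 that configure detected, the erl that the yaws launcher picks up is probably not the installation Yaws was built against.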

Failed to deploy the latest rabbitmq?

I installed Erlang (erlang-20.3.7-1.el7.centos.x86_64.rpm), then installed RabbitMQ (rabbitmq-server-3.7.6). When I check the node status, rabbitmqctl crashes with the message below. Have I missed something? Much appreciated!
=INFO REPORT==== 20-Jun-2018::11:11:00.813218 ===
application: logger
exited: {{shutdown,
{failed_to_start_child,'Elixir.Logger.ErrorHandler',noproc}},
{'Elixir.Logger.App',start,[normal,[]]}}
type: temporary
Could not start application logger: Logger.App.start(:normal, []) returned an error: shutdown: failed to start child: Logger.ErrorHandler
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
I just installed rabbitmq-server v3.7.6. If you are running a Debian-based distro, make sure you have pinned your esl-erlang and erlang* packages to 20.3.x (see here for more detail). Also, make sure the /var/lib/rabbitmq directory belongs to the rabbitmq user and group; if not, run chown -R rabbitmq:rabbitmq /var/lib/rabbitmq to fix it.
And finally, install just one of these packages: esl-erlang or erlang.
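For the pinning step mentioned above, a minimal apt preferences entry could look like the sketch below (the file name and the exact version pattern are my own example; adjust them to the 20.3.x build actually available in your repository):
# /etc/apt/preferences.d/erlang
Package: erlang* esl-erlang
Pin: version 1:20.3*
Pin-Priority: 1000
After adding it, run apt-get update and check with apt-cache policy esl-erlang erlang-base that a 20.3.x build is the candidate version.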
I saw that you typed erlang-20.3.7-1.el7.centos.x86_64.rpm, but the latest version on the v20.3.x branch is v20.3.6. Please check it again...
Read about RabbitMQ's compatibility with Erlang/OTP here. I have already tested the configuration above, and it works perfectly after running sudo rabbitmqctl status.
If it succeeds, you should see a result like the one below.
Status of node rabbit@eternalbox ...
[{pid,28772},
{running_applications,
[{rabbit,"RabbitMQ","3.7.6"},
{mnesia,"MNESIA CXC 138 12","4.15.3"},
{rabbit_common,
"Modules shared by rabbitmq-server and rabbitmq-erlang-client",
"3.7.6"},
{ranch_proxy_protocol,"Ranch Proxy Protocol Transport","1.5.0"},
{ranch,"Socket acceptor pool for TCP protocols.","1.5.0"},
{ssl,"Erlang/OTP SSL application","8.2.6"},
{public_key,"Public key infrastructure","1.5.2"},
{asn1,"The Erlang ASN1 compiler version 5.0.5","5.0.5"},
{os_mon,"CPO CXC 138 46","2.4.4"},
{crypto,"CRYPTO","4.2.2"},
{jsx,"a streaming, evented json parsing toolkit","2.8.2"},
{xmerl,"XML parser","1.3.16"},
{inets,"INETS CXC 138 49","6.5.1"},
{recon,"Diagnostic tools for production use","2.3.2"},
{lager,"Erlang logging framework","3.5.1"},
{goldrush,"Erlang event stream processor","0.1.9"},
{compiler,"ERTS CXC 138 10","7.1.5"},
{syntax_tools,"Syntax tools","2.1.4"},
{syslog,"An RFC 3164 and RFC 5424 compliant logging framework.","3.4.2"},
{sasl,"SASL CXC 138 11","3.1.2"},
{stdlib,"ERTS CXC 138 10","3.4.5"},
{kernel,"ERTS CXC 138 10","5.4.3"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang/OTP 20 [erts-9.3.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:128] [hipe] [kernel-poll:true]\n"},
{memory,
[{connection_readers,0},
{connection_writers,0},
{connection_channels,0},
{connection_other,0},
{queue_procs,0},
{queue_slave_procs,0},
{plugins,5936},
{other_proc,19686048},
{metrics,184824},
{mgmt_db,0},
{mnesia,73040},
{other_ets,1882432},
{binary,57352},
{msg_index,28976},
{code,25081646},
{atom,1041593},
{other_system,12953873},
{allocated_unused,16013176},
{reserved_unallocated,2306048},
{strategy,rss},
{total,[{erlang,60995720},{rss,79314944},{allocated,77008896}]}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
{vm_memory_calculation_strategy,rss},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,6579339264},
{disk_free_limit,50000000},
{disk_free,118526930944},
{file_descriptors,
[{total_limit,924},{total_used,2},{sockets_limit,829},{sockets_used,0}]},
{processes,[{limit,1048576},{used,211}]},
{run_queue,0},
{uptime,87},
{kernel,{net_ticktime,60}}]
Good luck!
This is because you have the wrong Erlang version installed: to install RabbitMQ properly you need Erlang 20.3, because Erlang 21 only works with the RabbitMQ 3.7.6 beta.
To install RabbitMQ and its dependencies:
sudo apt install rabbitmq-server erlang-base=1:20.3-1 erlang-syntax-tools=1:20.3-1 erlang-asn1=1:20.3-1 erlang-crypto=1:20.3-1 erlang-mnesia=1:20.3-1 erlang-runtime-tools=1:20.3-1 erlang-public-key=1:20.3-1 erlang-ssl=1:20.3-1 erlang-diameter=1:20.3-1 erlang-inets=1:20.3-1 erlang-xmerl=1:20.3-1 erlang-edoc=1:20.3-1 erlang-eldap=1:20.3-1 erlang-erl-docgen=1:20.3-1 erlang-eunit=1:20.3-1 erlang-ic=1:20.3-1 erlang-inviso=1:20.3-1 erlang-odbc=1:20.3-1 erlang-snmp=1:20.3-1 erlang-os-mon=1:20.3-1 erlang-parsetools=1:20.3-1 erlang-percept=1:20.3-1 erlang-ssh=1:20.3-1 erlang-tools=1:20.3-1 erlang-nox=1:20.3-1
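Optionally (my own addition, not part of the answer above), hold the key packages afterwards so a routine apt upgrade does not pull in a newer Erlang and break the broker again:
sudo apt-mark hold rabbitmq-server erlang-base erlang-nox
sudo apt-mark showhold    # verify the holds are in place
Run sudo apt-mark unhold on the same packages later when you deliberately want to upgrade.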

Automatically supply yes to phx.new prompts

I'm running this containerized instance of Phoenix.
The documentation says the following command can be run, but it gives this error:
root@890ba3f1be37:/code# mix phx.new hello -y
** (Mix) Invalid option: -y
The environment details are:
root@890ba3f1be37:/code# mix --version
Erlang/OTP 20 [erts-9.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:10] [kernel-poll:false]
Mix 1.5.2
root@890ba3f1be37:/code# elixir --version
Erlang/OTP 20 [erts-9.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:10] [kernel-poll:false]
Elixir 1.5.2
root@890ba3f1be37:/code# mix phx.new --version
Phoenix v1.3.0
Am I missing something here?
I believe the documentation is incorrect as the mix task unconditionally calls Mix.shell.yes?. You can instead pipe echo yes into mix phx.new ... to automatically respond to the prompt with yes.
echo yes | mix phx.new foo
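If the generator asks more than one question, the coreutils yes utility (which keeps printing y until the pipe closes) has the same effect; this is just the generic shell trick, not a documented phx.new option:
yes | mix phx.new foo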

Apache Ignite nodes cannot communicate

I've configured Apache Ignite 1.8.0 programmatically and can start a server with a single node, but when another node joins, they cannot communicate and I receive many of the following two messages in the logs. These continue until the other node is stopped.
ERROR 12:52:39,187-0800 [*Initialization*] util.nio.GridDirectParser: Failed to read message [msg=null, buf=java.nio.DirectByteBuffer[pos=5 lim=420 cap=32768], reader=null, ses=GridSelectorNioSessionImpl [selectorIdx=0, queueSize=1, writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=5 lim=420 cap=32768], recovery=null, super=GridNioSessionImpl [locAddr=/10.97.184.106:5702, rmtAddr=/10.97.189.92:58788, createTime=1484945559174, closeTime=0, bytesSent=0, bytesRcvd=420, sndSchedTime=1484945559174, lastSndTime=1484945559174, lastRcvTime=1484945559185, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser#21e93eaf, directMode=true], GridConnectionBytesVerifyFilter], accepted=true]]]
class org.apache.ignite.IgniteException: Invalid message type: -84
at org.apache.ignite.internal.managers.communication.GridIoMessageFactory.create(GridIoMessageFactory.java:805)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$5.create(TcpCommunicationSpi.java:1631)
at org.apache.ignite.internal.util.nio.GridDirectParser.decode(GridDirectParser.java:76)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:104)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
WARN 12:52:39,188-0800 [*Initialization*] communication.tcp.TcpCommunicationSpi: Failed to process selector key (will close): GridSelectorNioSessionImpl [selectorIdx=0, queueSize=1, writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=5 lim=420 cap=32768], recovery=null, super=GridNioSessionImpl [locAddr=/10.97.184.106:5702, rmtAddr=/10.97.189.92:58788, createTime=1484945559174, closeTime=0, bytesSent=0, bytesRcvd=420, sndSchedTime=1484945559174, lastSndTime=1484945559174, lastRcvTime=1484945559185, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser#21e93eaf, directMode=true], GridConnectionBytesVerifyFilter], accepted=true]]
ERROR 12:52:39,189-0800 [*Initialization*] communication.tcp.TcpCommunicationSpi: Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: Invalid message type: -84
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Invalid message type: -84
at org.apache.ignite.internal.managers.communication.GridIoMessageFactory.create(GridIoMessageFactory.java:805)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$5.create(TcpCommunicationSpi.java:1631)
at org.apache.ignite.internal.util.nio.GridDirectParser.decode(GridDirectParser.java:76)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:104)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
... 4 more
Version information.
>>> +----------------------------------------------------------------------+
>>> Ignite ver. 1.8.0#20161205-sha1:9ca40dbeb7d559fcb299bdb6f5c90cdf8ce7e533
>>> +----------------------------------------------------------------------+
>>> OS name: Windows Server 2012 R2 6.3 amd64
>>> CPU(s): 2
>>> Heap: 3.6GB
>>> VM name: 13752@host
>>> Grid name: T-XXX
>>> Local node [ID=983EC5A0-2D9A-40C9-B4C3-3D59739BDDB9, order=1, clientMode=false]
>>> Local node addresses: [hostname.example.com/0:0:0:0:0:0:0:1, /10.97.184.106, /127.0.0.1]
>>> Local ports: TCP:5702 TCP:5703 TCP:5705
One of the similar issues I've found in my research recommends disabling the shared memory feature (setSharedMemoryPort(-1)) as a first step in eliminating a problem like this.
The server is running on Windows and the other server joining the cache is on OS X.
INFO 12:50:17,569-0800 [*Initialization*] ignite.internal.IgniteKernal%T-XXX: OS: Windows Server 2012 R2 6.3 amd64
How do I prevent these errors? Have I configured the cluster poorly or is there an incompatibility between the two machines I am using?
Very likely it's a misconfiguration issue. This can happen if the discovery SPI on one node tries to connect to the communication SPI port on another node. See this post: http://apache-ignite-users.70518.x6.nabble.com/Invalid-message-type-84-error-td9869.html
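To make that separation explicit, here is a minimal programmatic configuration sketch of my own (not taken from the linked post; the ports are the Ignite defaults and the addresses are the two hosts from the logs above, so adjust both to your environment). The point is that each node keeps distinct discovery and communication ports and that the IP finder lists only discovery ports:
import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteStartSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Discovery SPI on its own port; the IP finder points only at discovery ports.
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setLocalPort(47500);
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("10.97.184.106:47500..47509",
                                            "10.97.189.92:47500..47509"));
        discovery.setIpFinder(ipFinder);

        // Communication SPI on a different port; shared memory disabled, as suggested above.
        TcpCommunicationSpi communication = new TcpCommunicationSpi();
        communication.setLocalPort(47100);
        communication.setSharedMemoryPort(-1);

        cfg.setDiscoverySpi(discovery);
        cfg.setCommunicationSpi(communication);
        Ignition.start(cfg);
    }
}
If both nodes agree on this layout (same discovery port range on both sides, and the communication port never listed in the IP finder), the "Invalid message type" errors caused by the two SPIs cross-connecting should disappear.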

Erlang "host key verification" error

I'm new to Tsung and Erlang, and I want to run Tsung in distributed mode.
When I use this command:
ardic@base-64-arcsp:~/tsungtest$ erl -rsh ssh -name ardic@tsung -setcookie tsung
Erlang R13B03 (erts-5.7.4) [source] [64-bit] [rq:1] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.4 (abort with ^G)
(ardic@tsung)1> slave:start(tsungnode2,ardic,"-setcookie tsung").
{error,timeout}
This is the error I get.
I have already done everything the Tsung FAQ suggests for the {error, timeout} case.
Do you have any idea?
I ran into the same error on my virtual Ubuntu machine, and eventually found that the virtual Ubuntu box can be a master but not a slave (another CentOS machine can be either slave or master). I do not know why. I'll join the mailing list to ask for more help.
=INFO REPORT==== 28-Mar-2012::01:00:09 ===
ts_os_mon_erlang:(3:<0.2713.0>) Fail to start beam on host "centos-181" ({error,
timeout})
=ERROR REPORT==== 28-Mar-2012::01:00:09 ===
** Generic server <0.2713.0> terminating
** Last message in was {timeout,#Ref<0.0.0.26000>,start_beam}
** When Server state == {state,{global,ts_mon},
10000,undefined,"centos-181",undefined}
** Reason for termination ==
** {error,timeout}
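For anyone else hitting this {error,timeout}: slave:start/3 with -rsh ssh simply runs erl on the remote host over a non-interactive ssh session, so a quick prerequisite check (my own sketch; "centos-181" is the host name from the log above, substitute your own slave host) is:
ssh centos-181 which erl                         # erl must be on the PATH of a non-interactive shell
ssh centos-181 "erl -noshell -eval 'halt().'"    # and must start without a password or host key prompt
If either command hangs, asks for a password, or stops at a host key verification prompt, fix the SSH keys/known_hosts and the remote PATH first; the -setcookie and -name flags only matter once the remote beam actually starts.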