RabbitMQ Shovel fails to connect
I have two RabbitMQ boxes, named 'centos' (192.168.1.115) and 'devserver' (192.168.1.126).
In 'centos' I have a queue named toshovel, bound to a topic exchange with routing key '#'. I tested publishing to that exchange and messages are transferred to the queue.
In 'devserver' I have:
1. a topic exchange named bino.topic
2. a queue named bino.nms.idc3d, bound to bino.topic
This is also tested, including using pika to publish messages from 'centos' to 'devserver', so I'm sure there is no firewall, permission, or authentication (user/password: esx/esx) problem.
Now I want to shovel from 'centos' to 'devserver'. I tried adding a dynamic shovel on 'centos' per https://www.rabbitmq.com/shovel-dynamic.html:
rabbitmqctl set_parameter shovel my-shovel '{"src-protocol": "amqp091", "src-uri": "amqp://esx:esx@192.168.1.115", "src-queue": "toshovel", "dest-protocol": "amqp091", "dest-uri": "amqp://esx:esx@192.168.126/", "dest-queue": "bino.nms.idc3d"}'
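For reference, an AMQP 0-9-1 URI has the shape amqp://user:password@host:port/vhost, with the credentials separated from the host by '@' and the vhost (when given) following the final slash. As a sanity check, such a URI can be decomposed with Python's standard library; parse_amqp_uri below is a hypothetical helper written only for this sketch, using the address from the setup above:

```python
from urllib.parse import urlsplit, unquote

def parse_amqp_uri(uri):
    """Split an amqp:// URI into credential, host, port, and vhost parts."""
    parts = urlsplit(uri)
    # Treat an empty or bare "/" path as the default vhost "/";
    # otherwise the vhost is the (percent-decoded) text after the slash.
    vhost = "/" if parts.path in ("", "/") else unquote(parts.path[1:])
    return {
        "user": unquote(parts.username or "guest"),
        "password": unquote(parts.password or "guest"),
        "host": parts.hostname or "localhost",
        "port": parts.port or 5672,  # 5672 is the default AMQP port
        "vhost": vhost,
    }

print(parse_amqp_uri("amqp://esx:esx@192.168.1.126/"))
# -> {'user': 'esx', 'password': 'esx', 'host': '192.168.1.126', 'port': 5672, 'vhost': '/'}
```

This only checks the URI's shape, not whether the host is actually reachable from the source broker.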
but the centos log (/var/log/rabbitmq/rabbit@centos.log) said:
2018-06-20 14:03:21.800 [info] <0.735.0> terminating static worker with {timeout,{gen_server,call,[<0.763.0>,connect,60000]}}
2018-06-20 14:03:21.800 [error] <0.735.0> ** Generic server <0.735.0> terminating
** Last message in was {'$gen_cast',init}
** When Server state == {state,undefined,undefined,undefined,undefined,{<<"/">>,<<"my-shovel">>},dynamic,#{ack_mode => on_confirm,dest => #{dest_queue => <<"bino.nms.idc3d">>,fields_fun => #Fun<rabbit_shovel_parameters.11.26683091>,module => rabbit_amqp091_shovel,props_fun => #Fun<rabbit_shovel_parameters.12.26683091>,resource_decl => #Fun<rabbit_shovel_parameters.10.26683091>,uris => ["amqp://esx:esx@192.168.126/"]},name => <<"my-shovel">>,reconnect_delay => 5,shovel_type => dynamic,source => #{delete_after => never,module => rabbit_amqp091_shovel,prefetch_count => 1000,queue => <<"toshovel">>,resource_decl => #Fun<rabbit_shovel_parameters.14.26683091>,source_exchange_key => <<>>,uris => ["amqp://esx:esx@192.168.1.115"]}},undefined,undefined,undefined,undefined,undefined}
** Reason for termination ==
** {timeout,{gen_server,call,[<0.763.0>,connect,60000]}}
2018-06-20 14:03:21.800 [warning] <0.743.0> closing AMQP connection <0.743.0> (192.168.1.115:48223 -> 192.168.1.115:5672 - Shovel my-shovel, vhost: '/', user: 'esx'):
client unexpectedly closed TCP connection
2018-06-20 14:03:21.800 [error] <0.735.0> CRASH REPORT Process <0.735.0> with 1 neighbours exited with reason: {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in gen_server2:terminate/3 line 1166
2018-06-20 14:03:21.801 [error] <0.410.0> Supervisor {<0.410.0>,rabbit_shovel_dyn_worker_sup} had child {<<"/">>,<<"my-shovel">>} started with rabbit_shovel_worker:start_link(dynamic, {<<"/">>,<<"my-shovel">>}, [{<<"dest-protocol">>,<<"amqp091">>},{<<"dest-queue">>,<<"bino.nms.idc3d">>},{<<"dest-uri">>,<<"a...">>},...]) at <0.735.0> exit with reason {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in context child_terminated
2018-06-20 14:03:21.802 [error] <0.738.0> ** Generic server <0.738.0> terminating
** Last message in was {'EXIT',<0.735.0>,{timeout,{gen_server,call,[<0.763.0>,connect,60000]}}}
** When Server state == {state,amqp_network_connection,{state,#Port<0.29325>,<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,10,<0.744.0>,131072,<0.737.0>,undefined,false},<0.742.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]},2047,[{<<"capabilities">>,table,[{<<"publisher_confirms">>,bool,true},{<<"exchange_exchange_bindings">>,bool,true},{<<"basic.nack">>,bool,true},{<<"consumer_cancel_notify">>,bool,true},{<<"connection.blocked">>,bool,true},{<<"consumer_priorities">>,bool,true},{<<"authentication_failure_close">>,bool,true},{<<"per_consumer_qos">>,bool,true},{<<"direct_reply_to">>,bool,true}]},{<<"cluster_name">>,longstr,<<"rabbit@centos">>},{<<"copyright">>,longstr,<<"Copyright (C) 2007-2018 Pivotal Software, Inc.">>},{<<"information">>,longstr,<<"Licensed under the MPL. See http://www.rabbitmq.com/">>},{<<"platform">>,longstr,<<"Erlang/OTP 20.3.4">>},{<<"product">>,longstr,<<"RabbitMQ">>},{<<"version">>,longstr,<<"3.7.5">>}],none,false}
** Reason for termination ==
** "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}"
2018-06-20 14:03:21.802 [error] <0.738.0> CRASH REPORT Process <0.738.0> with 0 neighbours exited with reason: "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}" in gen_server:handle_common_reply/8 line 726
2018-06-20 14:03:21.802 [error] <0.752.0> Supervisor {<0.752.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.738.0>, 1, <0.753.0>, {<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,1}) at <0.755.0> exit with reason {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in context child_terminated
2018-06-20 14:03:21.802 [error] <0.752.0> Supervisor {<0.752.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.738.0>, 1, <0.753.0>, {<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,1}) at <0.755.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:03:21.803 [error] <0.736.0> Supervisor {<0.736.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.737.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.738.0> exit with reason "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}" in context child_terminated
2018-06-20 14:03:21.803 [error] <0.736.0> Supervisor {<0.736.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.737.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.738.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:03:26.865 [info] <0.835.0> accepting AMQP connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672)
2018-06-20 14:03:26.934 [info] <0.835.0> Connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672) has a client-provided name: Shovel my-shovel
2018-06-20 14:03:26.935 [info] <0.835.0> connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672 - Shovel my-shovel): user 'esx' authenticated and granted access to vhost '/'
2018-06-20 14:04:26.938 [info] <0.827.0> terminating static worker with {timeout,{gen_server,call,[<0.855.0>,connect,60000]}}
2018-06-20 14:04:26.938 [error] <0.827.0> ** Generic server <0.827.0> terminating
** Last message in was {'$gen_cast',init}
** When Server state == {state,undefined,undefined,undefined,undefined,{<<"/">>,<<"my-shovel">>},dynamic,#{ack_mode => on_confirm,dest => #{dest_queue => <<"bino.nms.idc3d">>,fields_fun => #Fun<rabbit_shovel_parameters.11.26683091>,module => rabbit_amqp091_shovel,props_fun => #Fun<rabbit_shovel_parameters.12.26683091>,resource_decl => #Fun<rabbit_shovel_parameters.10.26683091>,uris => ["amqp://esx:esx@192.168.126/"]},name => <<"my-shovel">>,reconnect_delay => 5,shovel_type => dynamic,source => #{delete_after => never,module => rabbit_amqp091_shovel,prefetch_count => 1000,queue => <<"toshovel">>,resource_decl => #Fun<rabbit_shovel_parameters.14.26683091>,source_exchange_key => <<>>,uris => ["amqp://esx:esx@192.168.1.115"]}},undefined,undefined,undefined,undefined,undefined}
** Reason for termination ==
** {timeout,{gen_server,call,[<0.855.0>,connect,60000]}}
2018-06-20 14:04:26.939 [warning] <0.835.0> closing AMQP connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672 - Shovel my-shovel, vhost: '/', user: 'esx'):
client unexpectedly closed TCP connection
2018-06-20 14:04:26.939 [error] <0.827.0> CRASH REPORT Process <0.827.0> with 1 neighbours exited with reason: {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in gen_server2:terminate/3 line 1166
2018-06-20 14:04:26.939 [error] <0.410.0> Supervisor {<0.410.0>,rabbit_shovel_dyn_worker_sup} had child {<<"/">>,<<"my-shovel">>} started with rabbit_shovel_worker:start_link(dynamic, {<<"/">>,<<"my-shovel">>}, [{<<"dest-protocol">>,<<"amqp091">>},{<<"dest-queue">>,<<"bino.nms.idc3d">>},{<<"dest-uri">>,<<"a...">>},...]) at <0.827.0> exit with reason {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in context child_terminated
2018-06-20 14:04:26.940 [error] <0.830.0> ** Generic server <0.830.0> terminating
** Last message in was {'EXIT',<0.827.0>,{timeout,{gen_server,call,[<0.855.0>,connect,60000]}}}
** When Server state == {state,amqp_network_connection,{state,#Port<0.29425>,<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,10,<0.836.0>,131072,<0.829.0>,undefined,false},<0.834.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]},2047,[{<<"capabilities">>,table,[{<<"publisher_confirms">>,bool,true},{<<"exchange_exchange_bindings">>,bool,true},{<<"basic.nack">>,bool,true},{<<"consumer_cancel_notify">>,bool,true},{<<"connection.blocked">>,bool,true},{<<"consumer_priorities">>,bool,true},{<<"authentication_failure_close">>,bool,true},{<<"per_consumer_qos">>,bool,true},{<<"direct_reply_to">>,bool,true}]},{<<"cluster_name">>,longstr,<<"rabbit@centos">>},{<<"copyright">>,longstr,<<"Copyright (C) 2007-2018 Pivotal Software, Inc.">>},{<<"information">>,longstr,<<"Licensed under the MPL. See http://www.rabbitmq.com/">>},{<<"platform">>,longstr,<<"Erlang/OTP 20.3.4">>},{<<"product">>,longstr,<<"RabbitMQ">>},{<<"version">>,longstr,<<"3.7.5">>}],none,false}
** Reason for termination ==
** "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}"
2018-06-20 14:04:26.940 [error] <0.830.0> CRASH REPORT Process <0.830.0> with 0 neighbours exited with reason: "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}" in gen_server:handle_common_reply/8 line 726
2018-06-20 14:04:26.941 [error] <0.844.0> Supervisor {<0.844.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.830.0>, 1, <0.846.0>, {<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,1}) at <0.847.0> exit with reason {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in context child_terminated
2018-06-20 14:04:26.941 [error] <0.844.0> Supervisor {<0.844.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.830.0>, 1, <0.846.0>, {<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,1}) at <0.847.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:04:26.941 [error] <0.828.0> Supervisor {<0.828.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.829.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.830.0> exit with reason "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}" in context child_terminated
2018-06-20 14:04:26.942 [error] <0.828.0> Supervisor {<0.828.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.829.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.830.0> exit with reason reached_max_restart_intensity in context shutdown
and from /var/log/rabbitmq/log/crash.log:
2018-06-20 14:04:40 =SUPERVISOR REPORT====
Supervisor: {<0.914.0>,amqp_connection_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.916.0>},{name,connection},{mfargs,{amqp_gen_connection,start_link,[<0.915.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]}]}},{restart_type,intrinsic},{shutdown,brutal_kill},{child_type,worker}]
Kindly give me some clue.
Related
Detox launches emulator but can't install the app on it afterwards: DetoxRuntimeError: Aborted detox.init() execution, and now running detox.cleanup()
When I launch a detox test with an emulator(one that I specify in detox config file) already running, detox finds the emulator, attaches it to it, and runs tests on it. When I don't have an emulator running, it helps load the emulator itself. But then when detox tries to install the app on the emulator next, it gets stuck and then eventually fails. Trace logs(trimmed out artifacts config from the log for brevity). Goto the log entry related to ALLOCATE_DEVICE after which it should've installed the device. $ detox test -c android.emu.release --loglevel trace 00:24:21.224 detox[17068] INFO: [test.js] DETOX_CONFIGURATION="android.emu.release" DETOX_LOGLEVEL="trace" DETOX_REPORT_SPECS=true DETOX_START_TIMESTAMP=1663526361217 DETOX_USE_CUSTOM_LOGGER=true jest --config D:\Myapp\app\mobile\e2e\config.json --testNamePattern "^((?!:ios:).)*$" e2e 00:24:26.614 detox[19864] TRACE: [DETOX_CREATE] created a Detox instance with config: { appsConfig: { default: { type: 'android.apk', binaryPath: 'android/app/build/outputs/apk/release/app-release.apk', testBinaryPath: 'android/app/build/outputs/apk/androidTest/release/app-release-androidTest.apk', build: 'cd android && gradlew clean && gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..' } }, behaviorConfig: { init: { reinstallApp: true, exposeGlobals: true }, cleanup: { shutdownDevice: false }, launchApp: 'auto' }, cliConfig: { configuration: 'android.emu.release', loglevel: 'trace', useCustomLogger: true }, configurationName: 'android.emu.release', deviceConfig: { type: 'android.emulator', gpuMode: 'swiftshader_indirect', device: { avdName: '31_aosp' } }, runnerConfig: { testRunner: 'jest', runnerConfig: 'D:\\Myappp\\app\\mobile\\e2e\\config.json', specs: 'e2e', skipLegacyWorkersInjection: true }, sessionConfig: { autoStart: true, sessionId: '8a6a88d9-1e5b-5df3-6cda-4c90f05b1e3b', debugSynchronization: 10000 } } 00:24:26.625 detox[19864] DEBUG: [WSS_CREATE] Detox server listening on localhost:51367... 
00:24:26.643 detox[19864] DEBUG: [WSS_CONNECTION, #51368] registered a new connection. 00:24:26.645 detox[19864] TRACE: [WS_OPEN] opened web socket to: ws://localhost:51367 00:24:26.647 detox[19864] TRACE: [WS_SEND] {"type":"login","params":{"sessionId":"8a6a88d9-1e5b-5df3-6cda-4c90f05b1e3b","role":"tester"},"messageId":0} 00:24:26.649 detox[19864] TRACE: [WSS_GET_FROM, #51368] {"type":"login","params":{"sessionId":"8a6a88d9-1e5b-5df3-6cda-4c90f05b1e3b","role":"tester"},"messageId":0} 00:24:26.650 detox[19864] TRACE: [SESSION_CREATED] created session 8a6a88d9-1e5b-5df3-6cda-4c90f05b1e3b 00:24:26.651 detox[19864] TRACE: [WSS_SEND_TO, #tester] {"type":"loginSuccess","params":{"testerConnected":true,"appConnected":false},"messageId":0} 00:24:26.652 detox[19864] TRACE: [SESSION_JOINED] tester joined session 8a6a88d9-1e5b-5df3-6cda-4c90f05b1e3b 00:24:26.653 detox[19864] TRACE: [WS_MESSAGE] {"type":"loginSuccess","params":{"testerConnected":true,"appConnected":false},"messageId":0} 00:24:26.754 detox[19864] DEBUG: [EXEC_CMD, #0] "C:\Users\PRADEEP\AppData\Local\Android\Sdk\emulator\emulator.EXE" -list-avds --verbose 00:24:26.826 detox[19864] TRACE: [EXEC_SUCCESS, #0] 31_aosp 31_api 31_playstore 00:24:26.827 detox[19864] DEBUG: [EXEC_CMD, #1] "C:\Users\PRADEEP\AppData\Local\Android\Sdk\emulator\emulator.EXE" -version 00:24:27.413 detox[19864] TRACE: [EXEC_SUCCESS, #1] INFO | Duplicate loglines will be removed, if you wish to see each indiviudal line launch with the -log-nofilter flag. Android emulator version 31.3.10.0 (build_id 8807927) (CL:N/A) Copyright (C) 2006-2017 The Android Open Source Project and many others. This program is a derivative of the QEMU CPU emulator (www.qemu.org). This software is licensed under the terms of the GNU General Public License version 2, as published by the Free Software Foundation, and may be copied, distributed, and modified under those terms. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. INFO | Android emulator version 31.3.10.0 (build_id 8807927) (CL:N/A) 00:24:27.416 detox[19864] DEBUG: [EMU_BIN_VERSION_DETECT] Detected emulator binary version { major: 31, minor: 3, patch: 10, toString: [Function: toString] } 00:24:27.416 detox[19864] DEBUG: [ALLOCATE_DEVICE] Trying to allocate a device based on "31_aosp" 00:24:27.419 detox[19864] DEBUG: [EXEC_CMD, #2] "C:\Users\PRADEEP\AppData\Local\Android\Sdk\platform-tools\adb.EXE" devices 00:24:27.520 detox[19864] DEBUG: [EXEC_SUCCESS, #2] List of devices attached 00:24:27.523 detox[19864] DEBUG: [ALLOCATE_DEVICE] Settled on emulator-12586 00:24:27.524 detox[19864] DEBUG: [SPAWN_CMD] C:\Users\PRADEEP\AppData\Local\Android\Sdk\emulator\emulator.EXE -verbose -no-audio -no-boot-anim -gpu swiftshader_indirect -port 12586 #31_aosp 00:26:26.610 detox[19864] ERROR: [DETOX_INIT_ERROR] DetoxRuntimeError: Aborted detox.init() execution, and now running detox.cleanup() HINT: Most likely, your test runner is tearing down the suite due to the timeout error at DetoxRuntimeErrorComposer.abortedDetoxInit (D:\Myapp\app\mobile\node_modules\detox\src\errors\DetoxRuntimeErrorComposer.js:13:12) at Detox.[_assertNoPendingInit] (D:\Myapp\app\mobile\node_modules\detox\src\Detox.js:238:48) at Detox.beforeEach (D:\Myapp\app\mobile\node_modules\detox\src\Detox.js:122:37) at DetoxExportWrapper.<computed> (D:\Myapp\app\mobile\node_modules\detox\src\DetoxExportWrapper.js:94:32) at DetoxAdapterImpl.beforeEach (D:\Myapp\app\mobile\node_modules\detox\runners\jest\DetoxAdapterImpl.js:28:22) at DetoxAdapterJasmine.beforeEach (D:\Myapp\app\mobile\node_modules\detox\runners\jest\DetoxAdapterJasmine.js:20:5) at Object.<anonymous> (D:\Myapp\app\mobile\e2e\init.ts:21:3) 00:26:26.617 detox[19864] TRACE: 
[ARTIFACTS_LIFECYCLE] artifactsManager.onBeforeCleanup() 00:26:26.622 detox[19864] DEBUG: [WSS_CLOSE] Detox server has been closed gracefully
TestContainer Rabbitmq seems to release connection as soon as it is start
I am using testcontainers in a spring boot project (version : 1.17.2) and I am trying to spin up a rabbitmq container. It seems like rabbitmq container starts up successfully, but it releases connection as soon as it is up. I can see some error in logs but after that I can see that the test container is started. I am kind of flummoxed as to why am I seeing this error and/or if the container is started or not ? Pasting excerpt from logs : 15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management_agent 15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_web_dispatch 15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management 15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_prometheus 15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> Server startup complete; 4 plugins started. 
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: cancel 15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.DefaultManagedHttpClientConnection - http-outgoing-1: close connection IMMEDIATE 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: endpoint closed 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: discarding endpoint 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: releasing endpoint 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection is not kept alive 15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "end of stream" 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection released [route: {}->npipe://localhost:2375][total available: 0; route allocated: 0 of 2147483647; total allocated: 0 of 2147483647] 15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException" 15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException" 15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse - Failed to close the response java.io.IOException: 
java.nio.channels.ClosedChannelException at java.base/java.nio.channels.Channels$2.read(Channels.java:240) at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.LoggingInputStream.read(LoggingInputStream.java:81) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:149) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:261) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:147) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:314) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.IncomingHttpEntity.close(IncomingHttpEntity.java:111) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.io.entity.HttpEntityWrapper.close(HttpEntityWrapper.java:120) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48) at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.message.BasicClassicHttpResponse.close(BasicClassicHttpResponse.java:93) at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpResponse.close(CloseableHttpResponse.java:200) at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse.close(ApacheDockerHttpClientImpl.java:256) at 
org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:277) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.nio.channels.ClosedChannelException: null .................................... 15:00:44.009 [main] INFO 🐳 [rabbitmq:3.9.13-management-alpine] - Container rabbitmq:3.9.13-management-alpine started in PT7.8752359S Config Java : public abstract class RabbitMqTestContainerConfiguration { private static final int RABBITMQ_DEFAULT_PORT = 5672; #Container public static RabbitMQContainer rabbitMQContainer = new RabbitMQContainer("rabbitmq:3.9.13-management-alpine") .withExposedPorts(RABBITMQ_DEFAULT_PORT).withStartupTimeout(Duration.ofMinutes(3)); public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> { #Override public void initialize(ConfigurableApplicationContext configurableApplicationContext) { TestPropertySourceUtils.addInlinedPropertiesToEnvironment(configurableApplicationContext, "spring.rabbitmq.host=" + rabbitMQContainer.getHost(), "spring.rabbitmq.port=" + rabbitMQContainer.getMappedPort(RABBITMQ_DEFAULT_PORT), "spring.rabbitmq.username=" + rabbitMQContainer.getAdminUsername(), "spring.rabbitmq.password=" + rabbitMQContainer.getAdminPassword()); } } } ```
Media player aborts automatically (player-status-change -> status = "aborted") for an unknow reason
Integration was working perfectly for weeks but we now have a Media player status change happening without reason: 07:46:15:137 Agora-SDK [DEBUG]: Player 0 audio Status Changed Detected by Timer: init=>aborted Chrome console extract See below logs for more details: onmessage # socket.js:40 EventTarget.dispatchEvent # sockjs.js:170 (anonymous) # sockjs.js:887 SockJS._transportMessage # sockjs.js:885 EventEmitter.emit # sockjs.js:86 WebSocketTransport.ws.onmessage # sockjs.js:2961 wrapFn # zone-evergreen.js:1191 invokeTask # zone-evergreen.js:391 runTask # zone-evergreen.js:168 invokeTask # zone-evergreen.js:465 invokeTask # zone-evergreen.js:1603 globalZoneAwareCallback # zone-evergreen.js:1629 client:185 ./node_modules/pdfjs-dist/build/pdf.js Module not found: Error: Can't resolve 'zlib' in 'C:\codeRep\aboard\node_modules\pdfjs-dist\build' warnings # client:185 onmessage # socket.js:40 EventTarget.dispatchEvent # sockjs.js:170 (anonymous) # sockjs.js:887 SockJS._transportMessage # sockjs.js:885 EventEmitter.emit # sockjs.js:86 WebSocketTransport.ws.onmessage # sockjs.js:2961 wrapFn # zone-evergreen.js:1191 invokeTask # zone-evergreen.js:391 runTask # zone-evergreen.js:168 invokeTask # zone-evergreen.js:465 invokeTask # zone-evergreen.js:1603 globalZoneAwareCallback # zone-evergreen.js:1629 meeting-board loaded 07:46:11:960 Agora-SDK [INFO]: Creating client, MODE: interop CODEC: vp8 07:46:11:962 Agora-SDK [INFO]: [23D44] Initializing AgoraRTC client, appId: 1790a8792d1e4ff9b1718dc756710f54. 
07:46:11:963 Agora-SDK [INFO]: [23D44] Adding event handler on connection-state-change 07:46:11:965 Agora-SDK [INFO]: processId: process-79c97042-927c-45f6-83c0-48a79f7ecfbc 07:46:11:966 Agora-SDK [DEBUG]: Flush cached event reporting: 6 07:46:12:132 Agora-SDK [INFO]: [23D44] Added event handler on connection-state-change, network-quality 07:46:12:220 Agora-SDK [INFO]: [23D44] Adding event handler on connection-state-change 07:46:12:264 Agora-SDK [INFO]: [23D44] Added event handler on connection-state-change, network-quality 07:46:12:445 Agora-SDK [DEBUG]: Get UserAccount Successfully {uid: 10014, url: "https://sua-ap-web-1.agora.io/api/v1"} 07:46:12:445 Agora-SDK [DEBUG]: getUserAccount Success https://sua-ap-web-1.agora.io/api/v1 JXL5oSGFHauvWRdQVouA => 10014 07:46:12:453 Agora-SDK [DEBUG]: [23D44] Connect to choose_server: https://webrtc2-ap-web-1.agora.io/api/v1 07:46:12:469 Agora-SDK [DEBUG]: [23D44] Connect to choose_server: https://webrtc2-ap-web-2.agoraio.cn/api/v1 07:46:12:732 Agora-SDK [DEBUG]: [23D44] Get gateway address: (3) ["148-153-25-164.edge.agora.io:5886", "128-1-77-69.edge.agoraio.cn:5887", "128-1-78-92.edge.agora.io:5891"] 07:46:12:733 Agora-SDK [INFO]: [23D44] Joining channel: yqYjJu7a1slX8T9CxfNr 07:46:12:734 Agora-SDK [DEBUG]: [23D44] setParameter in distribution: {"event_uuid":"123"} 07:46:12:734 Agora-SDK [DEBUG]: [23D44] register client Channel yqYjJu7a1slX8T9CxfNr Uid 10014 07:46:12:735 Agora-SDK [DEBUG]: [23D44] start connect:148-153-25-164.edge.agora.io:5886 07:46:12:847 Agora-SDK [DEBUG]: [23D44] websockect opened: 148-153-25-164.edge.agora.io:5886 07:46:12:849 Agora-SDK [DEBUG]: [23D44] Connected to gateway server 07:46:12:850 Agora-SDK [DEBUG]: Turn config {mode: "auto", username: "test", credential: "111111", forceturn: false, url: "148-153-25-164.edge.agora.io", …} 07:46:12:930 Agora-SDK [INFO]: [23D44] Join channel yqYjJu7a1slX8T9CxfNr success, join with uid: JXL5oSGFHauvWRdQVouA. 
07:46:12:932 Agora-SDK [DEBUG]: Create stream 07:46:12:936 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] Requested access to local media 07:46:12:936 Agora-SDK [DEBUG]: GetUserMedia {"audio":{}} 07:46:12:937 Agora-SDK [INFO]: [23D44] Adding event handler on error 07:46:12:951 Agora-SDK [INFO]: [23D44] Added event handler on error, player-status-change, stream-added, stream-subscribed, stream-removed, peer-leave 07:46:13:132 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] User has granted access to local media accessAllowed getUserMedia successfully 07:46:13:134 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] play(). agora_local undefined 07:46:13:137 Agora-SDK [INFO]: [23D44] Publishing stream, uid JXL5oSGFHauvWRdQVouA 07:46:13:138 Agora-SDK [DEBUG]: [23D44] setClientRole to host 07:46:13:138 Agora-SDK [INFO]: [23D44] Adding event handler on stream-published 07:46:13:145 Agora-SDK [INFO]: [23D44] Added event handler on stream-published 07:46:13:172 Agora-SDK [DEBUG]: [23D44]Created webkitRTCPeerConnnection with config "{"iceServers":[{"url":"stun:webcs.agora.io:3478"},{"username":"test","credential":"111111","credentialType":"password","urls":"turn:148-153-25-164.edge.agora.io:5916?transport=udp"},{"username":"test","credential":"111111","credentialType":"password","urls":"turn:148-153-25-164.edge.agora.io:5916?transport=tcp"}],"sdpSemantics":"plan-b"}". 
07:46:13:174 Agora-SDK [DEBUG]: [23D44] PeerConnection add stream : MediaStream {id: "SvvMFhU4Lx0jFjXTAK0DxYrmOLHyexu6dNKM", active: true, onaddtrack: null, onremovetrack: null, onactive: null, …} 07:46:13:190 Agora-SDK [DEBUG]: [23D44]srflx candidate : null relay candidate: null host candidate : a=candidate:2681806481 1 udp 2122262783 2a02:2788:2b4:58d:681b:8505:ebc2:9c35 50241 typ host generation 0 network-id 2 network-cost 10 07:46:13:197 Agora-SDK [DEBUG]: [23D44] SDP exchange in publish : send offer -- {messageType: "OFFER", sdp: "v=0 ↵o=- 8826689166328964007 2 IN IP4 127.0.0.1 ↵s…7242 label:9a3f62cf-5bd4-477a-b7b3-1fca64d6c2ca ↵", offererSessionId: 104, seq: 1, tiebreaker: 72973729} 07:46:13:218 Agora-SDK [INFO]: [23D44] Local stream published with uid JXL5oSGFHauvWRdQVouA 07:46:13:218 Agora-SDK [DEBUG]: [23D44] SDP exchange in publish : receive answer -- {answererSessionId: 106, messageType: "ANSWER", offererSessionId: 104, sdp: "v=0 ↵o=- 0 0 IN IP4 127.0.0.1 ↵s=AgoraGateway ↵t=0…bel:pa99gEbWHL ↵a=ssrc:44444 label:pa99gEbWHLa0 ↵", seq: 1} 07:46:13:250 Agora-SDK [DEBUG]: [23D44] publish p2p connected: Map(1) {3292 => {…}} 07:46:13:251 Agora-SDK [DEBUG]: Flush cached event reporting: 4 07:46:13:251 Agora-SDK [INFO]: [23D44] Publish success, uid: JXL5oSGFHauvWRdQVouA Publish local stream successfully 07:46:15:137 Agora-SDK [DEBUG]: Player 0 audio Status Changed Detected by Timer: init=>aborted 07:46:15:138 Agora-SDK [DEBUG]: Media Player Status Change {type: "player-status-change", playerId: 0, mediaType: "audio", status: "aborted", reason: "timer", …} 07:46:15:168 Agora-SDK [DEBUG]: Media Player Status Change {type: "player-status-change", playerId: 0, mediaType: "video", status: "play", reason: "playing", …} Anyone ever experienced this? Any hint? Thanks Received an answer from Agora: Typically, this is because of the autoplay policy, you can find more information on how to bypass here: enter link description here I'll give it a try and update the post
RabbitMQ messages are not consumed
I would like to use RabbitMQ to send messages from a webapp backend to a second module. On my laptop it works, but when I deploy the application on a VPS, even in dev mode, it no longer works... Could you please help me sort this out?

Current status: if I check the queues on the VPS where both modules are installed, things look OK (messages are added to the queue):

$ rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
MyMessages	2

When I launch the second module, I get the following log:

Waiting for a request on queue : MyMessages, hosted at localhost

coming from the following Java code:

public static void main(String[] args) throws IOException, TimeoutException {
    RabbitMQConsumer rabbitMQConsumer = new RabbitMQConsumer();
    rabbitMQConsumer.waitForRequests();
    System.out.println("Waiting for a request on queue : " + AppConfig.QUEUE_NAME
            + ", hosted at " + AppConfig.QUEUE_HOST);
}

public RabbitMQConsumer() throws IOException, TimeoutException {
    mapper = new ObjectMapper();
    ConnectionFactory connectionFactory = new ConnectionFactory();
    connectionFactory.setHost(AppConfig.QUEUE_HOST);
    Connection connection = connectionFactory.newConnection();
    channel = connection.createChannel();
}

public void waitForRequests() throws IOException {
    DefaultConsumer consumer = new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                AMQP.BasicProperties properties, byte[] body) throws IOException {
            try {
                System.out.println("Message received ! ");
                channel.basicAck(envelope.getDeliveryTag(), false);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    channel.queueDeclare(AppConfig.QUEUE_NAME, true, false, false, null);
    channel.basicConsume(AppConfig.QUEUE_NAME, consumer);
}

I think both modules are looking at the same queue, and there are messages in the queue, so... to me it looks like the messages are not consumed.
I've looked at the status of RabbitMQ, but I do not know how to interpret it:

$ invoke-rc.d rabbitmq-server status
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-04-07 18:24:59 CEST; 1h 38min ago
  Process: 17103 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl shutdown (code=exited, status=0/SUCCESS)
 Main PID: 17232 (beam.smp)
   Status: "Initialized"
    Tasks: 84 (limit: 4915)
   CGroup: /system.slice/rabbitmq-server.service
           ├─17232 /usr/lib/erlang/erts-9.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 1280000 -K true -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/ebin -noshell -noinput -s rabbit boot -sname rabbit@vps5322 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit lager_log_root "/var/log/rabbitmq" -rabbit lager_default_file "/var/log/rabbitmq/rabbit@vps5322.log" -rabbit lager_upgrade_file "/var/log/rabbitmq/rabbit@vps5322_upgrade.log" -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/plugins:/usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
           ├─17319 /usr/lib/erlang/erts-9.3/bin/epmd -daemon
           ├─17453 erl_child_setup 1024
           ├─17475 inet_gethost 4
           └─17476 inet_gethost 4

Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:  ##  ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:  ##  ##      RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc.
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:  ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:  ######  ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:  ##########  Logs: /var/log/rabbitmq/rabbit@vps5322.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]:                    /var/log/rabbitmq/rabbit@vps5322_upgrade.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: Starting broker...
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: systemd unit for activation check: "rabbitmq-server.service"
Apr 07 18:24:59 vps5322 systemd[1]: Started RabbitMQ broker.
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: completed with 0 plugins.

Finally, note that the webapp is a Play Framework app with these dependencies:

libraryDependencies ++= Seq(
    guice,
    "com.rabbitmq" % "amqp-client" % "5.2.0"
)

whereas the second module is pure Java, based on Maven, with the following pom entry:

<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>5.2.0</version>
</dependency>

Any idea of the problem? Thank you very much!
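Not part of the original question, but a quick way to confirm whether a consumer is actually attached is to ask the broker for message and consumer counts per queue. The sketch below applies the filtering logic to a hypothetical sample of `rabbitmqctl list_queues name messages consumers` output; on the real host you would pipe the actual command into the same awk filter:

```shell
# Sketch: flag queues that have ready messages but no attached consumers.
# On the VPS, replace the here-doc with the real command:
#   rabbitmqctl list_queues name messages consumers | tail -n +2 | flag_stuck
flag_stuck() {
  # columns: queue-name  messages  consumers
  awk '$2 > 0 && $3 == 0 { print $1 ": " $2 " messages, no consumers" }'
}

# Hypothetical sample output, mirroring the situation in the question:
flag_stuck <<'EOF'
MyMessages 2 0
OtherQueue 0 1
EOF
# prints: MyMessages: 2 messages, no consumers
```

If `MyMessages` shows a non-zero consumer count while messages still pile up, the consumer is connected but failing before (or instead of) acking, which points at the consuming application rather than the broker.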
Finally, I've found the problem. This configuration was actually working, but I could not see it because of a crash in my own app that went unlogged due to an error in my log4j configuration.

Just in case it helps someone: the error I had was that a local library, included in my pom with a relative path (${project.basedir}), was found by my IDE but not once deployed on the VPS. To solve this, I moved this (thankfully) very small library directly into my project.

After fixing that, I had to reset RabbitMQ and then everything was fine:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app

Thank you very much. Regards,
RabbitMQ crashing during restart
When I try to start my RabbitMQ server, I get the following error in log/*-sasl.log:

=CRASH REPORT==== 10-Apr-2013::23:24:32 ===
  crasher:
    initial call: application_master:init/4
    pid: <0.69.0>
    registered_name: []
    exception exit: {bad_return,
                      {{rabbit,start,[normal,[]]},
                       {'EXIT',
                         {undef,
                           [{os_mon_mib,module_info,[attributes],[]},
                            {rabbit_misc,module_attributes,1,
                              [{file,"src/rabbit_misc.erl"},{line,760}]},
                            {rabbit_misc,'-all_module_attributes/1-fun-0-',3,
                              [{file,"src/rabbit_misc.erl"},{line,779}]},
                            {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
                            {rabbit,boot_steps,0,[{file,"src/rabbit.erl"},{line,441}]},
                            {rabbit,start,2,[{file,"src/rabbit.erl"},{line,356}]},
                            {application_master,start_it_old,4,
                              [{file,"application_master.erl"},{line,274}]}]}}}}
      in function application_master:init/4 (application_master.erl, line 138)
    ancestors: [<0.68.0>]
    messages: [{'EXIT',<0.70.0>,normal}]
    links: [<0.68.0>,<0.7.0>]
    dictionary: []
    trap_exit: true
    status: running
    heap_size: 2584
    stack_size: 24
    reductions: 227
  neighbours:

startup_log has:

{error_logger,{{2013,4,11},{2,48,36}},std_error,"File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7/ebin. Function: read_file_info. Process: code_server."}
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7/ebin. Function: read_file_info. Process: code_server.
Activating RabbitMQ plugins ...
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: application_controller.
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: application_controller.
Skipping /usr/lib64/erlang/lib/os_mon-2.2.8/ebin/os_mon_mib.beam (unreadable)
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: systools_make.
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: systools_make.

startup_err has:

Erlang has closed
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{undef,[{os_mon_mib,module_info,[attributes],[]},{rabbit_misc,module

Any help understanding the cause of the error would be great. Thanks.
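The `eacces` entries above mean "permission denied": the user running RabbitMQ cannot read parts of the Erlang library tree, and the later `undef` on `os_mon_mib` is the knock-on effect of the unreadable `.beam` file. A sketch of one possible fix, run as root, using the paths taken from the log; the assumption that missing read permission alone causes the crash is mine, not something stated in the question:

```shell
# Assumption: the crash stems from unreadable Erlang lib directories,
# per the eacces entries in the startup log. Restore read/traverse bits.
for d in /usr/lib64/erlang/lib/otp_mibs-1.0.7 \
         /usr/lib64/erlang/lib/os_mon-2.2.8; do
  if [ -d "$d" ]; then
    chmod -R a+rX "$d"   # a+rX: read for everyone, execute bit only on dirs
    echo "fixed permissions on $d"
  else
    echo "skipping (not present on this machine): $d"
  fi
done
```

After adjusting the permissions, restarting the broker should get past the `undef` on `os_mon_mib`; if `eacces` reappears for other targets, the same treatment applies to those paths.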