I am using a custom Linux board and have taken the latest otbr-agent code.
I have also taken the latest ot-nrf528xx code for the nRF52840.
otbr-agent is able to communicate with the RCP successfully and my OpenThread network is created as well.
But it randomly fails with the following error and exits:
otbr-agent[14116]: 00:35:22.736 [WARN]-PLAT----: radio tx timeout
otbr-agent[14116]: 00:35:22.736 [CRIT]-PLAT----: HandleRcpTimeout() at /usr/src/debug/otbr/git-r0/ot-br-posix/third_party/openthread/repo/src/lib/spinel/radio_spinel_impl.hpp:2218: RadioSpinelNoResponse
The full otbr-agent logs from startup are below; it exited without any further activity.
Once I was able to commission and communicate with a device, and afterwards it exited with the same error.
Is this an issue with OTBR, or with the RCP?
#/usr/sbin/otbr-agent -I wpan0 -B wlan0 spinel+hdlc+uart:///dev/ttymxc0 trel://wlan0 -v
otbr-agent[14116]: [INFO]-UTILS---: Running 0.3.0-fe1263578-dirty
otbr-agent[14116]: [INFO]-UTILS---: Thread version: 1.2.0
otbr-agent[14116]: [INFO]-UTILS---: Thread interface: wpan0
otbr-agent[14116]: [INFO]-UTILS---: Backbone interface: wlan0
otbr-agent[14116]: [INFO]-UTILS---: Radio URL: spinel+hdlc+uart:///dev/ttymxc0
otbr-agent[14116]: [INFO]-UTILS---: Radio URL: trel://wlan0
otbr-agent[14116]: 49d.18:38:21.580 [INFO]-PLAT----: RCP reset: RESET_POWER_ON
otbr-agent[14116]: 49d.18:38:21.609 [NOTE]-PLAT----: RCP API Version: 5
otbr-agent[14116]: 00:00:00.073 [INFO]-CORE----: [settings] Read NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3, keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.075 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1f9ed, maccntr:0x1f7f2, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.146 [INFO]-CORE----: [settings] Read OmrPrefix fd7a:10e5:333a:5b12::/64
otbr-agent[14116]: 00:00:00.150 [INFO]-CORE----: [settings] Read OnLinkPrefix fd2f:7c27:62f6:0::/64
otbr-agent[14116]: 00:00:00.158 [INFO]-BR------: Infra interface (7) state changed: NOT RUNNING -> RUNNING
otbr-agent[14116]: [INFO]-AGENT---: Set state callback: OK
otbr-agent[14116]: 00:00:00.159 [INFO]-SRP-----: [server] selected port 53535
otbr-agent[14116]: 00:00:00.173 [INFO]-N-DATA--: Publisher: Publishing DNS/SRP service unicast (ml-eid, port:53535)
otbr-agent[14116]: 00:00:00.174 [INFO]-N-DATA--: Publisher: DNS/SRP service - State: NoEntry -> ToAdd
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: [INFO]-AGENT---: Initialize OpenThread Border Router Agent: OK
otbr-agent[14116]: [INFO]-UTILS---: Border router agent started.
otbr-agent[14116]: 00:00:00.202 [INFO]-CORE----: Notifier: StateChanged (0x101fc300) [KeySeqCntr NetData Channel PanId NetName ExtPanId NetworkKey PSKc SecPolicy ...
otbr-agent[14116]: 00:00:00.213 [INFO]-CORE----: Notifier: StateChanged (0x101fc300) ... ActDset]
otbr-agent[14116]: 00:00:00.214 [INFO]-MLE-----: [announce-sender] ChannelMask:{ 11-26 }, period:21500
otbr-agent[14116]: 00:00:00.214 [INFO]-MLE-----: [announce-sender] StartingChannel:18
otbr-agent[14116]: 00:00:00.222 [INFO]-MLE-----: [announce-sender] StartingChannel:18
otbr-agent[14116]: 00:00:00.250 [INFO]-PLAT----: [netif] Host netif is down
otbr-agent[14116]: 00:00:00.262 [INFO]-PLAT----: [netif] Added multicast address ff02::1
otbr-agent[14116]: 00:00:00.262 [INFO]-PLAT----: [netif] Added multicast address ff03::1
otbr-agent[14116]: 00:00:00.263 [INFO]-PLAT----: [netif] Added multicast address ff03::fc
otbr-agent[14116]: 00:00:00.281 [INFO]-PLAT----: [netif] Sent request#1 to add fe80::ac12:db55:3a8f:7115/64
otbr-agent[14116]: 00:00:00.282 [NOTE]-MLE-----: Role disabled -> detached
otbr-agent[14116]: 00:00:00.297 [INFO]-PLAT----: [netif] Sent request#2 to add fd5d:e08d:c5ec:42fc:7c75:ca66:5c72:a43b/64
otbr-agent[14116]: 00:00:00.313 [INFO]-PLAT----: [netif] Added multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:00.313 [INFO]-PLAT----: [netif] Added multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:00.323 [INFO]-PLAT----: [netif] Sent request#3 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:00.323 [INFO]-MLE-----: Attempt to become router
otbr-agent[14116]: 00:00:00.325 [INFO]-CORE----: [settings] Read NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3, keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.327 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1f9ed, maccntr:0x1f7f2, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.337 [INFO]-CORE----: [settings] Saved NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3, keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.345 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1fdd6, maccntr:0x1fbda, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.345 [INFO]-MLE-----: Send Link Request (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:00.345 [INFO]-CORE----: Notifier: StateChanged (0x0100103d) [Ip6+ Role LLAddr MLAddr Rloc+ Ip6Mult+ NetifState]
otbr-agent[14116]: 00:00:00.353 [INFO]-MLE-----: [announce-sender] Stopped
otbr-agent[14116]: 00:00:00.354 [NOTE]-PLAT----: [netif] Changing interface state to up.
otbr-agent[14116]: [INFO]-AGENT---: Thread is down
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: 00:00:00.475 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:00.539 [INFO]-CORE----: Notifier: StateChanged (0x00001000) [Ip6Mult+]
otbr-agent[14116]: 00:00:00.551 [INFO]-PLAT----: [trel] Interface address added successfully.
otbr-agent[14116]: 00:00:00.607 [INFO]-MAC-----: Sent IPv6 UDP msg, len:82, chksum:51e5, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:00.626 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:00.626 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:00.645 [NOTE]-PLAT----: [netif] ADD [U] fe80::ac12:db55:3a8f:7115 (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.646 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.646 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.674 [INFO]-PLAT----: [netif] Succeeded to process request#1
otbr-agent[14116]: 00:00:00.714 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:7c75:ca66:5c72:a43b (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.714 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.715 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.760 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:00.760 [INFO]-PLAT----: [netif] Succeeded to process request#2
otbr-agent[14116]: 00:00:00.824 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:00.824 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.825 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.825 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.825 [INFO]-PLAT----: [netif] Succeeded to process request#3
otbr-agent[14116]: 00:00:00.825 [INFO]-PLAT----: [netif] Host netif is up
otbr-agent[14116]: 00:00:01.220 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:01.222 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:01.222 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:01.223 [INFO]-CORE----: Notifier: StateChanged (0x00001000) [Ip6Mult+]
otbr-agent[14116]: 00:00:02.157 [NOTE]-MLE-----: RLOC16 e000 -> fffe
otbr-agent[14116]: 00:00:02.163 [INFO]-PLAT----: [netif] Sent request#4 to remove fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:02.165 [INFO]-MLE-----: AttachState Idle -> Start
otbr-agent[14116]: 00:00:02.166 [INFO]-CORE----: Notifier: StateChanged (0x10000040) [Rloc- ActDset]
otbr-agent[14116]: 00:00:02.181 [NOTE]-PLAT----: [netif] DEL [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (not found, ignored)
otbr-agent[14116]: 00:00:02.181 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:02.181 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:02.182 [INFO]-PLAT----: [netif] Succeeded to process request#4
otbr-agent[14116]: 00:00:02.413 [NOTE]-MLE-----: Attempt to attach - attempt 1, any-partition reattaching with Active Dataset
otbr-agent[14116]: 00:00:02.413 [INFO]-MLE-----: AttachState Start -> ParentReqRouters
otbr-agent[14116]: 00:00:02.414 [INFO]-MLE-----: Send Parent Request to routers (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:02.433 [INFO]-MAC-----: Sent IPv6 UDP msg, len:84, chksum:503d, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:02.434 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:02.434 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:03.164 [INFO]-MLE-----: AttachState ParentReqRouters -> ParentReqReeds
otbr-agent[14116]: 00:00:03.164 [INFO]-MLE-----: Send Parent Request to routers and REEDs (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: Sent IPv6 UDP msg, len:84, chksum:3d1a, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:04.415 [INFO]-MLE-----: AttachState ParentReqReeds -> Idle
otbr-agent[14116]: 00:00:04.416 [NOTE]-MLE-----: Allocate router id 56
otbr-agent[14116]: 00:00:04.416 [NOTE]-MLE-----: RLOC16 fffe -> e000
otbr-agent[14116]: 00:00:04.427 [INFO]-PLAT----: [netif] Sent request#5 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:04.428 [NOTE]-MLE-----: Role detached -> leader
otbr-agent[14116]: 00:00:04.449 [INFO]-PLAT----: [netif] Sent request#6 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:fc00/64
otbr-agent[14116]: 00:00:04.452 [INFO]-PLAT----: [netif] Added multicast address ff02::2
otbr-agent[14116]: 00:00:04.453 [INFO]-PLAT----: [netif] Added multicast address ff03::2
otbr-agent[14116]: 00:00:04.459 [NOTE]-MLE-----: Leader partition id 0x6f7040fb
otbr-agent[14116]: 00:00:04.459 [INFO]-CORE----: Notifier: StateChanged (0x100012a5) [Ip6+ Role Rloc+ PartitionId NetData Ip6Mult+ ActDset]
otbr-agent[14116]: 00:00:04.461 [INFO]-MLE-----: Send Data Response (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:04.461 [INFO]-BBR-----: PBBR state: None
otbr-agent[14116]: 00:00:04.463 [INFO]-BBR-----: Domain Prefix: ::/0, state: None
otbr-agent[14116]: 00:00:04.473 [INFO]-CORE----: [settings] Saved NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3, keyseq:0x0, ...
otbr-agent[14116]: 00:00:04.474 [INFO]-CORE----: [settings] ... pid:0x6f7040fb, mlecntr:0x1fdd9, maccntr:0x1fbda, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:04.474 [INFO]-MLE-----: [announce-sender] Started
otbr-agent[14116]: 00:00:04.480 [INFO]-MESH-CP-: Border Agent start listening on port 0
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Border Routing manager started
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Start Router Solicitation, scheduled in 803 milliseconds
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Start evaluating routing policy, scheduled in 162 milliseconds
otbr-agent[14116]: 00:00:04.481 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:ToAdd) in netdata - total:0, preferred:0, desired:2
otbr-agent[14116]: 00:00:04.481 [INFO]-N-DATA--: Publisher: DNS/SRP service - State: ToAdd -> Adding
otbr-agent[14116]: 00:00:04.482 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:Adding) - update in 2270 msec
otbr-agent[14116]: [INFO]-AGENT---: Thread is up
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: [INFO]-ADPROXY-: Started
otbr-agent[14116]: [INFO]-MDNS----: Avahi client state changed to 2.
otbr-agent[14116]: [INFO]-MDNS----: Avahi client ready.
otbr-agent[14116]: [INFO]-AGENT---: Publish meshcop service OpenThread Border Router._meshcop._udp.local.
otbr-agent[14116]: [INFO]-MDNS----: Avahi group change to state 0.
otbr-agent[14116]: [ERR ]-MDNS----: Group ready.
otbr-agent[14116]: [INFO]-MDNS----: Create service OpenThread Border Router._meshcop._udp for host localhost
otbr-agent[14116]: [INFO]-MDNS----: Commit service OpenThread Border Router._meshcop._udp
otbr-agent[14116]: [INFO]-ADPROXY-: Publish all hosts and services
otbr-agent[14116]: [INFO]-AGENT---: Start Thread Border Agent: OK
otbr-agent[14116]: 00:00:04.683 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.684 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.684 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.695 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.695 [INFO]-PLAT----: [netif] Succeeded to process request#5
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:04.706 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:04.707 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
otbr-agent[14116]: [INFO]-MDNS----: Avahi group change to state 1.
otbr-agent[14116]: [ERR ]-MDNS----: Group ready.
otbr-agent[14116]: 00:00:04.710 [INFO]-BR------: Evaluating routing policy
otbr-agent[14116]: 00:00:04.716 [INFO]-BR------: EvaluateOmrPrefix: No valid OMR prefixes found in Thread network
otbr-agent[14116]: 00:00:04.720 [INFO]-N-DATA--: Sent server data notification
otbr-agent[14116]: 00:00:04.720 [INFO]-BR------: Published local OMR prefix fd7a:10e5:333a:5b12::/64 in Thread network
otbr-agent[14116]: 00:00:04.727 [INFO]-BR------: Send OMR prefix fd7a:10e5:333a:5b12::/64 in RIO (valid lifetime = 1800 seconds)
otbr-agent[14116]: 00:00:04.729 [INFO]-BR------: Sent Router Advertisement on interface 7
otbr-agent[14116]: 00:00:04.730 [INFO]-BR------: Router advertisement scheduled in 16 seconds
otbr-agent[14116]: 00:00:04.731 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.737 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:fc00 (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.737 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.737 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.740 [INFO]-N-DATA--: Received network data registration
otbr-agent[14116]: 00:00:04.741 [INFO]-N-DATA--: Allocated Context ID = 1
otbr-agent[14116]: 00:00:04.742 [INFO]-N-DATA--: Sent network data registration acknowledgment
otbr-agent[14116]: 00:00:04.743 [INFO]-BR------: Received Router Advertisement from fe80:0:0:0:ac12:db55:3a8f:7115 on interface 7
otbr-agent[14116]: 00:00:04.763 [INFO]-PLAT----: [netif] Succeeded to process request#6
otbr-agent[14116]: 00:00:04.772 [INFO]-CORE----: Notifier: StateChanged (0x00000200) [NetData]
otbr-agent[14116]: 00:00:04.772 [INFO]-MLE-----: Send Data Response (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:04.772 [INFO]-BBR-----: PBBR state: None
otbr-agent[14116]: 00:00:04.773 [INFO]-BBR-----: Domain Prefix: ::/0, state: None
otbr-agent[14116]: 00:00:04.773 [INFO]-CORE----: [settings] Read SlaacIidSecretKey
otbr-agent[14116]: 00:00:04.773 [INFO]-UTIL----: SLAAC: Adding address fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec
otbr-agent[14116]: 00:00:04.792 [INFO]-PLAT----: [netif] Sent request#7 to add fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec/64
otbr-agent[14116]: 00:00:04.793 [INFO]-BR------: Start evaluating routing policy, scheduled in 191 milliseconds
otbr-agent[14116]: 00:00:04.793 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:Adding) in netdata - total:0, preferred:0, desired:2
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: Sent IPv6 UDP msg, len:96, chksum:bf39, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:04.802 [INFO]-CORE----: Notifier: StateChanged (0x00000001) [Ip6+]
otbr-agent[14116]: 00:00:04.818 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.819 [NOTE]-PLAT----: [netif] ADD [U] fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.819 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.822 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.839 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:04.840 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:04.841 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:04.842 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.843 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.848 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:04.849 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:04.849 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
otbr-agent[14116]: 00:00:04.852 [INFO]-PLAT----: [netif] Succeeded to process request#7
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: Sent IPv6 UDP msg, len:118, chksum:b222, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:05.207 [INFO]-BR------: Evaluating routing policy
otbr-agent[14116]: 00:00:05.208 [INFO]-BR------: Send OMR prefix fd7a:10e5:333a:5b12::/64 in RIO (valid lifetime = 1800 seconds)
otbr-agent[14116]: 00:00:05.210 [INFO]-BR------: Sent Router Advertisement on interface 7
otbr-agent[14116]: 00:00:05.210 [INFO]-BR------: Router advertisement scheduled in 16 seconds
otbr-agent[14116]: 00:00:05.211 [INFO]-BR------: Received Router Advertisement from fe80:0:0:0:ac12:db55:3a8f:7115 on interface 7
otbr-agent[14116]: 00:00:05.284 [INFO]-BR------: Router solicitation times out
otbr-agent[14116]: 00:00:05.381 [INFO]-MLE-----: Send Advertisement (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:05.399 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:83f4, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:05.405 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:05.409 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:05.540 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:05.558 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:05.574 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:05.580 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:05.580 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:05.581 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
...
otbr-agent[14116]: 00:34:30.334 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:a5b1, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:34:30.335 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:30.338 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:34:34.259 [INFO]-MLE-----: Send Announce on channel 21
otbr-agent[14116]: 00:34:34.281 [INFO]-MAC-----: Sent IPv6 UDP msg, len:83, chksum:9a63, to:0xffff, sec:yes, prio:net, radio:all
otbr-agent[14116]: 00:34:34.282 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:34.282 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:34:55.946 [INFO]-MLE-----: Send Announce on channel 22
otbr-agent[14116]: 00:34:55.971 [INFO]-MAC-----: Sent IPv6 UDP msg, len:83, chksum:3dc6, to:0xffff, sec:yes, prio:net, radio:all
otbr-agent[14116]: 00:34:55.972 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:55.972 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:35:02.159 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:35:12.789 [INFO]-MLE-----: Send Advertisement (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:35:12.807 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:daa6, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:35:12.814 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:35:12.814 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:35:17.734 [INFO]-MLE-----: Send Announce on channel 23
otbr-agent[14116]: 00:35:22.736 [WARN]-PLAT----: radio tx timeout
otbr-agent[14116]: 00:35:22.736 [CRIT]-PLAT----: HandleRcpTimeout() at /usr/src/debug/otbr/git-r0/ot-br-posix/third_party/openthread/repo/src/lib/spinel/radio_spinel_impl.hpp:2218: RadioSpinelNoResponse
It looks like these prints come from the OTBR application. The reason for the failure is a problem in the communication between your OTBR app and the RCP: the host stopped receiving responses over the Spinel link, so the agent aborted with RadioSpinelNoResponse.
Related
My coturn server always fails on TURN. I've tried many config variants, but nothing works.
The server is not behind NAT and has only a public IP.
I'm using the following config:
domain=sip.domain.ru
realm=sip.domain.ru
server-name=sip.domain.ru
#listening-ip=0.0.0.0
#external-ip=0.0.0.0
external-ip=213.232.207.000
external-ip=sip.domain.ru
listening-port=3478
min-port=10000
max-port=20000
fingerprint
log-file=/var/log/coturn/turnserver.log
verbose
user=DavidMaze:Password
lt-cred-mech
#allow-loopback-peers
web-admin
web-admin-ip=213.232.207.000
web-admin-port=8090
cert=/usr/share/coturn/server.crt
pkey=/usr/share/coturn/server.key
cipher-list="ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384"
When calling, there is a 60-second wait, and then the logs show:
0: log file opened: /var/log/coturn/turnserver_2023-01-13.log
0: pid file created: /run/turnserver/turnserver.pid
0: IO method (main listener thread): epoll (with changelist)
0: WARNING: I cannot support STUN CHANGE_REQUEST functionality because only one IP address is provided
0: Wait for relay ports initialization...
0: relay 213.232.207.000 initialization...
0: relay 213.232.207.000 initialization done
0: relay ::1 initialization...
0: relay ::1 initialization done
0: Relay ports initialization done
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=0 created
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=1 created
0: turn server id=3 created
0: turn server id=2 created
0: IPv4. TLS/SCTP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/SCTP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: turn server id=5 created
0: turn server id=4 created
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/SCTP listener opened on : ::1:3478
0: turn server id=6 created
0: turn server id=7 created
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/SCTP listener opened on : ::1:5349
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=9 created
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=11 created
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: turn server id=14 created
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: turn server id=13 created
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=10 created
0: turn server id=15 created
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: turn server id=8 created
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: turn server id=12 created
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. DTLS/UDP listener opened on: 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. DTLS/UDP listener opened on: 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. DTLS/UDP listener opened on: 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. DTLS/UDP listener opened on: 213.232.207.000:5349
0: IPv6. DTLS/UDP listener opened on: ::1:3478
0: IPv6. DTLS/UDP listener opened on: ::1:5349
0: Total General servers: 16
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (admin thread): epoll (with changelist)
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:8090
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:8090
0: IPv4. web-admin listener opened on : 213.232.207.000:8090
0: SQLite DB connection success: /var/lib/turn/turndb
5: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 188.162.5.118:34297
5: session 010000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
5: session 010000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
5: IPv4. Local relay addr: 213.232.207.000:11050
5: session 010000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
5: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
6: session 010000000000000001: peer 213.232.207.000 lifetime updated: 300
6: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
7: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 87.103.193.000:56186
7: session 006000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
7: session 006000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
7: IPv4. Local relay addr: 213.232.207.000:16236
7: session 006000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
7: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
7: session 006000000000000001: peer 213.232.207.000 lifetime updated: 300
7: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
15: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
17: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
26: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
27: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
36: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
38: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
46: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
47: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 188.162.5.118:23038
47: session 008000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
48: session 008000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
48: IPv4. Local relay addr: 213.232.207.000:16208
48: session 008000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
48: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
48: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
48: session 008000000000000001: peer 213.232.207.000 lifetime updated: 300
48: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
50: session 010000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
50: session 008000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
50: session 006000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
51: session 008000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=5, rb=364, sp=5, sb=508
51: session 008000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 188.162.5.118:23038, reason: allocation timeout
51: session 008000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 008000000000000001: peer 213.232.207.000 deleted
51: session 010000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=10, rb=592, sp=10, sb=1032
51: session 010000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 188.162.5.118:34297, reason: allocation timeout
51: session 010000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 010000000000000001: peer 213.232.207.000 deleted
51: session 006000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=58, rb=7500, sp=9, sb=892
51: session 006000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 87.103.193.000:56186, reason: allocation timeout
51: session 006000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 006000000000000001: peer 213.232.207.000 deleted
Also, two days ago I was getting 403: Forbidden IP, but that was fixed by commenting out listening-ip.
Fixed the issue. For others:
First, check the issue in different browsers. I found that calls work in Mozilla Firefox but do not work in Chromium-based browsers.
You can enable extra-verbose mode with the -V flag (uppercase) or --Verbose. This can help, but the logs are very noisy and you won't need them 95% of the time.
While testing the TURN server via the very popular WebRTC sample tool Trickle ICE, you may see "authentication failed?" with a relay candidate on the next line. This is not necessarily a problem; compare against another known-working TURN server (example).
Check the client's firewall for blocked STUN/TURN server ports, including the TURN relay port range. That was my case: the client's firewall was blocking ports 24000-64000. A minimal reachability probe is sketched after this list.
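A quick way to test the first hop is to send a STUN Binding request to the listening port and see whether anything comes back. Below is a minimal sketch in C# (the hostname, port, and the helper itself are illustrative, not part of coturn); it only probes the listening port, so the relay range (min-port..max-port) still has to be checked separately:

// StunProbe.cs - hypothetical helper: sends one STUN Binding request (RFC 5389)
// and reports whether any reply arrives before the timeout.
using System;
using System.Net;
using System.Net.Sockets;
using System.Security.Cryptography;

class StunProbe
{
    static void Main(string[] args)
    {
        var host = args.Length > 0 ? args[0] : "sip.domain.ru"; // assumption: your server
        var port = args.Length > 1 ? int.Parse(args[1]) : 3478;

        // STUN Binding Request: type=0x0001, length=0, magic cookie 0x2112A442,
        // followed by a 96-bit random transaction ID.
        var request = new byte[20];
        request[0] = 0x00; request[1] = 0x01;
        request[4] = 0x21; request[5] = 0x12; request[6] = 0xA4; request[7] = 0x42;
        RandomNumberGenerator.Fill(request.AsSpan(8, 12));

        using var udp = new UdpClient(host, port);
        udp.Client.ReceiveTimeout = 5000;
        udp.Send(request, request.Length);
        try
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            var response = udp.Receive(ref remote);
            // 0x0101 is a Binding Success Response.
            Console.WriteLine($"Got {response.Length} bytes, type 0x{response[0]:X2}{response[1]:X2}");
        }
        catch (SocketException)
        {
            Console.WriteLine("No response - the port may be blocked by a firewall.");
        }
    }
}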
I'm passing a token with the Authorization: Bearer {token} header.
The {token} was just issued by the OpenIddict server, which uses UseDataProtection() to create reference tokens.
The resource server is set up like this:
services.AddOpenIddict()
    .AddValidation(options =>
    {
        options.SetIssuer(authenticationSettings.Issuer);
        options.AddAudiences(resourceServerSettings.Name);
        options.AddEventHandler<ValidateTokenContext>(builder => builder.UseScopedHandler<ValidateAccessTokenHandler>());

        var encryptionCert = certificateSettings.IdentityEncryption.GetCertificate();
        var signingCert = certificateSettings.IdentitySigning.GetCertificate();
        options.AddEncryptionCertificate(encryptionCert);
        options.AddEncryptionKey(new X509SecurityKey(signingCert));

        options.UseDataProtection();
        options.UseSystemNetHttp();
        options.UseAspNetCore();
    });
This is picked up directly from the Zirku sample, although I'm not sure whether I need both the encryption certificate and the encryption key, or whether the encryption certificate alone should be enough. The encryption certificate and encryption key are shared between the main server that issues the token and the resource server, and I've verified that the thumbprints are identical (same with the signing key).
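For comparison, here is a minimal sketch of what the issuing side is assumed to look like (the names mirror the snippet above and are my assumptions, not the actual server code). Note that, as far as I understand, UseDataProtection() protects the token payload with the ASP.NET Core Data Protection key ring rather than with the X.509 certificates, so both applications also have to share the same Data Protection keys and application name:

// Hypothetical issuing-server registration - a sketch, not the real code.
services.AddOpenIddict()
    .AddServer(options =>
    {
        // The same certificates as registered on the validation side.
        options.AddEncryptionCertificate(certificateSettings.IdentityEncryption.GetCertificate());
        options.AddSigningCertificate(certificateSettings.IdentitySigning.GetCertificate());

        // Emit Data Protection tokens instead of JWTs.
        options.UseDataProtection();
        options.UseAspNetCore();
    });

// Assumed requirement: both apps must read the same Data Protection key ring,
// because UseDataProtection() encrypts tokens with these keys, not the certificates.
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/shared/keys")) // any shared store works
    .SetApplicationName("MyApp");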
I'm getting the following in the logs:
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessRequestContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+InferIssuerFromHost.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+ResolveServerConfiguration.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+EvaluateValidatedTokens.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+ExtractAccessTokenFromAuthorizationHeader.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+ExtractAccessTokenFromBodyForm.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+ExtractAccessTokenFromQueryString.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+ValidateRequiredTokens.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+ResolveTokenValidationParameters.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+ValidateIdentityModelToken.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.DataProtection.OpenIddictValidationDataProtectionHandlers+Protection+ValidateDataProtectionToken.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+NormalizeScopeClaims.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+MapInternalClaims.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+ValidatePrincipal.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ValidateTokenContext was marked as rejected by OpenIddict.Validation.OpenIddictValidationHandlers+Protection+ValidatePrincipal.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+ValidateAccessToken.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessAuthenticationContext was marked as rejected by OpenIddict.Validation.OpenIddictValidationHandlers+ValidateAccessToken.
[2022-11-29 16:16:41] info: OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandler[7]
OpenIddict.Validation.AspNetCore was not authenticated. Failure message: An error occurred while authenticating the current request.
[2022-11-29 16:16:41] info: Microsoft.AspNetCore.Authorization.DefaultAuthorizationService[2]
Authorization failed. These requirements were not met:
DenyAnonymousAuthorizationRequirement: Requires an authenticated user.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+ResolveHostChallengeProperties.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+AttachHostChallengeError.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+AttachDefaultChallengeError.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+AttachHttpResponseCode`1[[OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext, OpenIddict.Validation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=35a561290d20de2f]].
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.OpenIddictValidationHandlers+AttachCustomChallengeParameters.
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+AttachCacheControlHeader`1[[OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext, OpenIddict.Validation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=35a561290d20de2f]].
[2022-11-29 16:16:41] dbug: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The event OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext was successfully processed by OpenIddict.Validation.AspNetCore.OpenIddictValidationAspNetCoreHandlers+AttachWwwAuthenticateHeader`1[[OpenIddict.Validation.OpenIddictValidationEvents+ProcessChallengeContext, OpenIddict.Validation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=35a561290d20de2f]].
[2022-11-29 16:16:41] info: OpenIddict.Validation.OpenIddictValidationDispatcher[0]
The response was successfully returned as a challenge response: {
"error": "invalid_token",
"error_description": "The specified token is invalid.",
"error_uri": "https://documentation.openiddict.com/errors/ID2004"
}.
I can't find anything in there that tells me why it failed, or even how to intercept whatever is rejecting the token so that I can get more information, and this is with logging set to trace.
Any insight as to what I'm doing wrong?
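One way to get a little more signal is to register a custom handler using the same AddEventHandler pattern as above, ordered to run just before the built-in ValidatePrincipal handler, and check whether any of the protection handlers ever resolved a principal from the token. This is a sketch based on my reading of the OpenIddict 4.x event model; the ordering expression and handler are assumptions on my part:

// Hypothetical diagnostic handler: if Principal is still null just before
// ValidatePrincipal runs, neither ValidateIdentityModelToken nor
// ValidateDataProtectionToken could read the token (for example because the
// Data Protection key rings of the two applications do not match).
public sealed class LogUnresolvedTokenHandler :
    IOpenIddictValidationHandler<OpenIddictValidationEvents.ValidateTokenContext>
{
    public ValueTask HandleAsync(OpenIddictValidationEvents.ValidateTokenContext context)
    {
        if (context.Principal is null)
        {
            Console.WriteLine("No handler resolved a principal from the access token.");
        }
        return default;
    }
}

// Registered next to the existing AddEventHandler call:
options.AddEventHandler<OpenIddictValidationEvents.ValidateTokenContext>(builder =>
    builder.UseSingletonHandler<LogUnresolvedTokenHandler>()
           .SetOrder(OpenIddictValidationHandlers.Protection.ValidatePrincipal.Descriptor.Order - 1));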
I am trying to call channel.basicReject() to requeue a message based on some condition, by creating a MethodInterceptor (ConsumerAdvice) and adding it to the SimpleMessageListenerContainer with factory.setAdviceChain(new ConsumerAdvice()). I also have concurrentConsumers set to 10. The moment my reject condition is met, I issue the basicReject command and the message gets redelivered and processed by another consumer. During this redelivery process I get the error below:
2019-11-07 17:34:13.268 ERROR 29385 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
2019-11-07 17:34:13.268 DEBUG 29385 --- [ool-2-thread-13] o.s.a.r.listener.BlockingQueueConsumer : Received shutdown signal for consumer tag=amq.ctag-HUaN71TZUqMfLDR7k6LwGQ
com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:178)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:111)
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:670)
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:597)
at java.lang.Thread.run(Thread.java:748)
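For reference, the advice described above presumably looks something like this sketch (Spring AMQP applies the advice chain around the listener invocation, whose arguments are the Channel and the Message; shouldRequeue() is a hypothetical stand-in for the reject condition):

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

import com.rabbitmq.client.Channel;

import org.springframework.amqp.core.Message;

public class ConsumerAdvice implements MethodInterceptor {

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        Channel channel = (Channel) invocation.getArguments()[0];
        Message message = (Message) invocation.getArguments()[1];
        if (shouldRequeue(message)) {
            // Rejecting directly on the channel while the container runs in its
            // default AUTO acknowledge mode means the container will later try to
            // ack/reject the same (now unknown) delivery tag itself, which is what
            // surfaces as PRECONDITION_FAILED - unknown delivery tag.
            channel.basicReject(message.getMessageProperties().getDeliveryTag(), true);
            return null;
        }
        return invocation.proceed();
    }

    private boolean shouldRequeue(Message message) {
        return false; // hypothetical condition
    }
}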
My message is not getting lost, but I am seeing a bunch of the above errors and am unable to understand why this is happening. If anyone has any clues, please guide me.
Below are the trace logs:
2019-11-08 02:11:31.883 TRACE 8695 --- [askExecutor-138] o.s.a.r.c.CachingConnectionFactory : AMQChannel(amqp://guest@127.0.0.1:5672/,99) channel.getChannelNumber()
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ArgumentUtils : Channel number before triggering redelivery : 99
2019-11-08 02:11:31.883 TRACE 8695 --- [askExecutor-138] o.s.a.r.c.CachingConnectionFactory : AMQChannel(amqp://guest@127.0.0.1:5672/,99) channel.basicReject([2, true])
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ArgumentUtils : ==============================================================================
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ConsumerAdvice : Requeue Message attempted, status : true
2019-11-08 02:11:31.884 TRACE 8695 --- [askExecutor-138] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for message from consumer.
2019-11-08 02:11:31.884 TRACE 8695 --- [askExecutor-138] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@7783912f: tags=[[amq.ctag-eY7LN-1pSXPX8FKRBgt-ug]], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,99), conn: Proxy@37ffe4f3 Shared Rabbit Connection: SimpleConnection@708dfe10 [delegate=amqp://guest@127.0.0.1:5672/, localPort= 58638], acknowledgeMode=AUTO local queue size=0
2019-11-08 02:11:31.884 DEBUG 8695 --- [askExecutor-138] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.checkShutdown(BlockingQueueConsumer.java:436)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.nextMessage(BlockingQueueConsumer.java:501)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:843)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:832)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:78)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1073)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:178)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:111)
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:670)
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:597)
... 1 common frames omitted
2019-11-08 02:11:31.884 ERROR 8695 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 2, class-id=60, method-id=90)
You need to show your code and configuration.
It seems the SMLC is using its default configuration, which automatically acknowledges messages; this failure occurs because you already rejected the delivery yourself. Why are you interacting with the channel directly?
You can simply throw an exception and the container will reject the message on your behalf, as in the sketch below.
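The same requeue behaviour, expressed without touching the channel; a minimal sketch using an annotated listener (the queue name and the shouldRequeue() check are hypothetical). With the default defaultRequeueRejected=true, the container rejects with requeue=true for any exception other than AmqpRejectAndDontRequeueException:

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class WorkListener {

    @RabbitListener(queues = "work.queue")
    public void onMessage(Message message) {
        if (shouldRequeue(message)) {
            // The container catches this, performs basicReject(requeue=true)
            // for us, and does not attempt a second ack of the delivery tag.
            throw new IllegalStateException("requeue requested");
        }
        // ... normal processing ...
    }

    private boolean shouldRequeue(Message message) {
        return false; // hypothetical condition
    }
}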
I don't know if it will be helpful for someone, but I had the same error because of invalid input data. When I added the following properties:
"rabbit.listener.acknowledgeMode": "MANUAL",
"rabbit.listener.defaultRequeueRejected": "true",
"rabbit.listener.prefetchCount": "1",
the problem stopped breaking my program, but it only stopped my listener instead.
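For anyone wiring this in Java config rather than properties, those settings map onto the Spring AMQP container factory roughly as follows (a sketch; the bean wiring is illustrative):

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL); // listener must ack/reject itself
        factory.setDefaultRequeueRejected(true);            // requeue rejected deliveries
        factory.setPrefetchCount(1);                        // at most one unacked message per consumer
        return factory;
    }
}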
I am trying to set up nginx to map TLS connections to different backends based on the SNI server name. From what I can tell, my client is sending the server name, but the preread module is only reading a hyphen.
Here is my nginx config:
stream {
map_hash_bucket_size 64;
############################################################
### logging
log_format log_stream '$remote_addr [$time_local] $protocol [$ssl_preread_server_name] [$ssl_preread_alpn_protocols] [$instanceport] '
'$status $bytes_sent $bytes_received $session_time';
error_log /usr/home/glance/Logs/pservernginx.error.log info;
access_log /usr/home/glance/Logs/pservernginx.access.log log_stream;
############################################################
### ssl configuration
ssl_certificate /usr/home/glance/GlanceReleases/star.myglance.org.pem;
ssl_certificate_key /usr/home/glance/GlanceReleases/star.myglance.org.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5:!RC4;
limit_conn_zone $binary_remote_addr zone=ip_addr:10m;
########################################################################
### Raw TLS PServer Connections
### Listen for TLS on 5501 and forward to TCP sock 6500 (socket port)
### https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
map $ssl_preread_server_name $instanceport {
presence.myglance.org 6500;
presence-1.myglance.org 6501;
presence-2.myglance.org 6502;
default glance-no-upstream-instance-configured;
}
server {
listen 5501 ssl;
ssl_preread on;
proxy_connect_timeout 20s; # max time to connect to pserver
proxy_timeout 30s; # max time between successive reads or writes
proxy_pass 127.0.0.1:$instanceport;
}
}
Wireshark shows the Server Name header:
The nginx access log shows only hyphens for the preread variables:
108.49.96.66 [12/Apr/2019:11:50:58 +0000] TCP [-] [-] [glance-no-upstream-instance-configured] 500 0 0 0.066
I'm running nginx 1.14.2 on FreeBSD. How can I debug what is happening in the preread module?
================ UPDATE ===============
Turned on debug logging. Maybe "ssl preread: not a handshake" is a clue.
2019/04/12 14:49:50 [info] 61420#0: *9 client 108.49.96.66:54740 connected to 0.0.0.0:5501
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35000:256 #16
2019/04/12 14:49:50 [debug] 61419#0: accept on 0.0.0.0:5501, ready: 1
2019/04/12 14:49:50 [debug] 61419#0: accept() not ready (35: Resource temporarily unavailable)
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35600:256 #16
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 0
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 1
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 tcp_nodelay
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_do_handshake: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 kevent set event: 5: ft:-1 fl:0025
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer add: 5: 60000:29203481224
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL handshake handler: 0
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_do_handshake: 1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer del: 5: 29203481224
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801CFF000:16384
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35900:256 #16
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer add: 5: 30000:29203451252
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: 81
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread: not a handshake
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer del: 5: 29203451252
2019/04/12 14:49:50 [debug] 61420#0: *9 proxy connection handler
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801DF7000:400
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801CD9000:16384
2019/04/12 14:49:50 [debug] 61420#0: *9 stream map started
2019/04/12 14:49:50 [debug] 61420#0: *9 stream map: "" "glance-no-upstream-instance-configured"
================= UPDATE 2 ======================
I tested using
openssl s_client -connect ... -servername ...
instead of my client. Now it appears that the preread module is blocked waiting for data for 30 seconds (SSL_get_error code 2 is SSL_ERROR_WANT_READ):
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2019/04/23 13:04:30 [debug] 61419#0: *12844 event timer del: 3: 30147561850
2019/04/23 13:04:30 [debug] 61419#0: *12844 generic phase: 2
2019/04/23 13:04:30 [debug] 61419#0: *12844 ssl preread handler
2019/04/23 13:04:30 [debug] 61419#0: *12844 malloc: 0000000801CA6140:16384
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL_read: -1
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL_get_error: 2
2019/04/23 13:04:30 [debug] 61419#0: *12844 ssl preread handler
2019/04/23 13:04:30 [debug] 61419#0: *12844 posix_memalign: 0000000801DB3400:256 #16
2019/04/23 13:04:30 [debug] 61419#0: *12844 event timer add: 3: 30000:30147531898
2019/04/23 13:05:00 [debug] 61419#0: *12844 event timer del: 3: 30147531898
2019/04/23 13:05:00 [debug] 61419#0: *12844 finalize stream session: 200
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream log handler
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream map started
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream script var: ""
I found the problem:
listen 5501 ssl;
ssl_preread on;
The ssl parameter in the listen directive caused that nginx server block to perform the TLS handshake itself. By the time the preread module ran, the handshake bytes had already been consumed, which is consistent with the behavior I was seeing ("ssl preread: not a handshake"). In my case I still want nginx to offload the encryption, so I created a set of server blocks that terminate the TLS connection before passing to my back end.
This is the relevant portion of my nginx config after fixing it. Note that the last server directive (the one that uses ssl_preread) does not terminate the SSL connection.
########################################################################
### TLS Connections
### Listen for TLS on 5501 and forward to TCP sock 6500 (socket port)
### https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
map $ssl_preread_server_name $instanceport {
presence.myglance.org 5502;
presence-1.myglance.org 5503;
presence-2.myglance.org 5504;
default glance-no-upstream-instance-configured;
}
server {
listen 5502 ssl;
ssl_preread off;
proxy_pass 127.0.0.1:6502;
}
server {
listen 5503 ssl;
ssl_preread off;
proxy_pass 127.0.0.1:6503;
}
server {
listen 5504 ssl;
ssl_preread off;
proxy_pass 127.0.0.1:6504;
}
server {
listen 5501;
ssl_preread on;
proxy_connect_timeout 20s; # max time to connect to pserver
proxy_timeout 30s; # max time between successive reads or writes
proxy_pass 127.0.0.1:$instanceport;
}
In case you need to keep ssl in the listen directive, you can simply use $ssl_server_name in the map block instead of $ssl_preread_server_name, as sketched below.
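A minimal sketch of that variant, reusing the host names from above. Because the server block terminates TLS itself, the SNI value is taken from $ssl_server_name (provided by the stream ssl module) and ssl_preread is not needed:

map $ssl_server_name $instanceport {
    presence.myglance.org   6500;
    presence-1.myglance.org 6501;
    presence-2.myglance.org 6502;
    default                 glance-no-upstream-instance-configured;
}
server {
    listen 5501 ssl;
    proxy_pass 127.0.0.1:$instanceport;
}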
I have a synchronous Mule flow which reads messages from a Sonic topic and publishes them to a RabbitMQ exchange.
I am losing messages when RabbitMQ is brought up/down.
The Rabbit exchange publishes to HA queues.
How can I make sure Mule does not consume a message until a proper "Ack" is received from the Rabbit broker?
Here is the flow:
<jms:connector name="sonicMQConnectorSub" validateConnections="true" connectionFactory-ref="factorySub" doc:name="JMS" clientId="testClient" durable="true" maxRedelivery="-1" >
<reconnect-forever frequency="30000"/>
</jms:connector>
<spring:beans>
<spring:bean id="soniqMQConnectionFactoryBeanSub" name="factorySub" class="progress.message.jclient.ConnectionFactory">
<spring:property name="connectionURLs" value="tcp://server1:7800" />
<spring:property name="defaultUser" value="user" />
<spring:property name="defaultPassword" value="pass" />
</spring:bean>
</spring:beans>
<amqp:connector name="AMQP" validateConnections="true" host="server2" fallbackAddresses="server3" doc:name="AMQP Connector" port="5672" mandatory="true" activeDeclarationsOnly="true">
<reconnect-forever frequency="30000"/>
</amqp:connector>
<flow name="rabbitFlow1" doc:name="rabbitFlow1" processingStrategy="synchronous">
<jms:inbound-endpoint doc:name="JMS" connector-ref="sonicMQConnectorSub" topic="testtopic"/>
<logger message="Message: #[message.payload]" level="INFO" doc:name="Logger"/>
<amqp:outbound-endpoint exchangeName="rabbitExchange" exchangeDurable="true" responseTimeout="10000" connector-ref="AMQP" doc:name="AMQP" exchangeType="fanout"/>
</flow>
Updated 04/22
Here is the exception trace when Mule is connecting to the 2nd broker. This is when I lose a message.
2014-04-22 09:49:29,453 - org.mule.exception.DefaultSystemExceptionStrategy - ERROR -
********************************************************************************
Message : Connection shutdown detected for: AMQP
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. Software caused connection abort: recv failed (java.net.SocketException)
java.net.SocketInputStream:-2 (null)
2. connection error; reason: java.net.SocketException: Software caused connection abort: recv failed (com.rabbitmq.client.ShutdownSignalException)
com.rabbitmq.client.impl.AMQConnection:715 (null)
3. Connection shutdown detected for: AMQP (org.mule.transport.ConnectException)
org.mule.transport.amqp.AmqpConnector$1:502 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/ConnectException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.SocketException: Software caused connection abort: recv failed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
2014-04-22 09:49:29,453 - org.mule.exception.DefaultSystemExceptionStrategy - INFO - Exception caught is a ConnectException, attempting to reconnect...
2014-04-22 09:49:29,454 - org.mule.lifecycle.AbstractLifecycleManager - INFO - Stopping connector: AMQP
2014-04-22 09:49:29,454 - org.mule.lifecycle.AbstractLifecycleManager - INFO - Stopping: 'AMQP.dispatcher.1064499250'. Object is: AmqpMessageDispatcher
2014-04-22 09:49:29,454 - org.mule.lifecycle.AbstractLifecycleManager - INFO - Disposing: 'AMQP.dispatcher.1064499250'. Object is: AmqpMessageDispatcher
2014-04-22 09:49:29,455 - org.mule.transport.amqp.AmqpConnector - ERROR - clean connection shutdown; reason: Attempt to use closed connection
2014-04-22 09:49:29,461 - org.mule.transport.amqp.AmqpConnector - INFO - Connected: AmqpConnector
{
name=AMQP
lifecycle=stop
this=33c5919e
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[amqp]
serviceOverrides=<none>
}
2014-04-22 09:49:29,461 - org.mule.transport.amqp.AmqpConnector - INFO - Starting: AmqpConnector
{
name=AMQP
lifecycle=stop
this=33c5919e
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[amqp]
serviceOverrides=<none>
}
2014-04-22 09:49:29,461 - org.mule.lifecycle.AbstractLifecycleManager - INFO - Starting connector: AMQP
Updated 04/23 with the exception received when a JMS transaction is added to the AMQP outbound endpoint:
Message : No active AMQP transaction found for endpoint: DefaultOutboundEndpoint{endpointUri=amqp://rabbitExchange, connector=AmqpConnector
{
name=AMQP
lifecycle=start
this=25ec1ff7
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[amqp]
serviceOverrides=<none>
}
, name='endpoint.amqp.rabbitExchange', mep=ONE_WAY, properties={exchangeDurable=true, exchangeType=fanout}, transactionConfig=Transaction {factory=org.mule.transport.jms.JmsTransactionFactory@6491b172, action=ALWAYS_JOIN, timeout=30000}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Root Exception stack trace:
org.mule.transaction.IllegalTransactionStateException: No active AMQP transaction found for endpoint: DefaultOutboundEndpoint{endpointUri=amqp://rabbitExchange, connector=AmqpConnector
{
name=AMQP
lifecycle=start
this=25ec1ff7
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[amqp]
serviceOverrides=<none>
}
, name='endpoint.amqp.rabbitExchange', mep=ONE_WAY, properties={exchangeDurable=true, exchangeType=fanout}, transactionConfig=Transaction {factory=org.mule.transport.jms.JmsTransactionFactory@6491b172, action=ALWAYS_JOIN, timeout=30000}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}
at org.mule.transport.amqp.AmqpMessageDispatcher.getEventChannel(AmqpMessageDispatcher.java:298)
at org.mule.transport.amqp.AmqpMessageDispatcher.doOutboundAction(AmqpMessageDispatcher.java:152)
at org.mule.transport.amqp.AmqpMessageDispatcher.doDispatch(AmqpMessageDispatcher.java:127)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
2014-04-23 10:52:03,178 - org.mule.transport.jms.JmsTransaction - WARN - Transaction rollback attempted, but no resource bound to org.mule.transport.jms.JmsTransaction@d4ac3d8f-caf6-11e3-bf9a-8b266a026dee [status=STATUS_MARKED_ROLLBACK, key=null, resource=null]
I see two options:
1. Make the JMS client durable and consume testtopic transactionally, so that if the amqp:outbound-endpoint fails, the message will be redelivered.
2. Wrap the amqp:outbound-endpoint with until-successful to retry the outbound dispatch until the AMQP connector reconnects to RabbitMQ; a sketch follows.
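A minimal Mule 3.x sketch of option 2; the object store bean and retry values are illustrative, and attribute names follow the Mule 3 until-successful documentation:

<spring:bean id="retryStore" class="org.mule.util.store.SimpleMemoryObjectStore"/>

<until-successful objectStore-ref="retryStore" maxRetries="60" secondsBetweenRetries="30" doc:name="Until Successful">
    <amqp:outbound-endpoint exchangeName="rabbitExchange" exchangeDurable="true" connector-ref="AMQP" doc:name="AMQP" exchangeType="fanout"/>
</until-successful>

Note that an in-memory object store loses pending retries if Mule restarts; a persistent store would be needed for end-to-end delivery guarantees.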