Coturn fails on TURN (allocation timeout) - webrtc

My coturn server always fails on TURN. I've tried many config variants, but nothing works.
The server is not behind NAT and has only a public IP.
I'm using the following config:
domain=sip.domain.ru
realm=sip.domain.ru
server-name=sip.domain.ru
#listening-ip=0.0.0.0
#external-ip=0.0.0.0
external-ip=213.232.207.000
external-ip=sip.domain.ru
listening-port=3478
min-port=10000
max-port=20000
fingerprint
log-file=/var/log/coturn/turnserver.log
verbose
user=DavidMaze:Password
lt-cred-mech
#allow-loopback-peers
web-admin
web-admin-ip=213.232.207.000
web-admin-port=8090
cert=/usr/share/coturn/server.crt
pkey=/usr/share/coturn/server.key
cipher-list="ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384"
When making a call, there is a wait of about 60 seconds, then the logs show:
0: log file opened: /var/log/coturn/turnserver_2023-01-13.log
0: pid file created: /run/turnserver/turnserver.pid
0: IO method (main listener thread): epoll (with changelist)
0: WARNING: I cannot support STUN CHANGE_REQUEST functionality because only one IP address is provided
0: Wait for relay ports initialization...
0: relay 213.232.207.000 initialization...
0: relay 213.232.207.000 initialization done
0: relay ::1 initialization...
0: relay ::1 initialization done
0: Relay ports initialization done
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=0 created
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=1 created
0: turn server id=3 created
0: turn server id=2 created
0: IPv4. TLS/SCTP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/SCTP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: turn server id=5 created
0: turn server id=4 created
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/SCTP listener opened on : ::1:3478
0: turn server id=6 created
0: turn server id=7 created
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/SCTP listener opened on : ::1:5349
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IO method (general relay thread): epoll (with changelist)
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=9 created
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=11 created
0: IO method (general relay thread): epoll (with changelist)
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: turn server id=14 created
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: turn server id=13 created
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: turn server id=10 created
0: turn server id=15 created
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: turn server id=8 created
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: turn server id=12 created
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. DTLS/UDP listener opened on: 127.0.0.1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. DTLS/UDP listener opened on: 127.0.0.1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:3478
0: IPv4. DTLS/UDP listener opened on: 213.232.207.000:3478
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv6. TLS/TCP listener opened on : ::1:5349
0: IPv4. DTLS/UDP listener opened on: 213.232.207.000:5349
0: IPv6. DTLS/UDP listener opened on: ::1:3478
0: IPv6. DTLS/UDP listener opened on: ::1:5349
0: Total General servers: 16
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
0: IO method (admin thread): epoll (with changelist)
0: IPv4. TLS/SCTP listener opened on : 213.232.207.000:8090
0: IPv4. TLS/TCP listener opened on : 213.232.207.000:8090
0: IPv4. web-admin listener opened on : 213.232.207.000:8090
0: SQLite DB connection success: /var/lib/turn/turndb
5: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 188.162.5.118:34297
5: session 010000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
5: session 010000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
5: IPv4. Local relay addr: 213.232.207.000:11050
5: session 010000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
5: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
6: session 010000000000000001: peer 213.232.207.000 lifetime updated: 300
6: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
7: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 87.103.193.000:56186
7: session 006000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
7: session 006000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
7: IPv4. Local relay addr: 213.232.207.000:16236
7: session 006000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
7: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
7: session 006000000000000001: peer 213.232.207.000 lifetime updated: 300
7: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
15: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
17: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
26: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
27: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
36: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
38: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
46: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
47: handle_udp_packet: New UDP endpoint: local addr 213.232.207.000:3478, remote addr 188.162.5.118:23038
47: session 008000000000000001: realm <sip.domain.ru> user <>: incoming packet BINDING processed, success
48: session 008000000000000001: realm <sip.domain.ru> user <>: incoming packet message processed, error 401: Unauthorized
48: IPv4. Local relay addr: 213.232.207.000:16208
48: session 008000000000000001: new, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=600
48: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet ALLOCATE processed, success
48: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet BINDING processed, success
48: session 008000000000000001: peer 213.232.207.000 lifetime updated: 300
48: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet CREATE_PERMISSION processed, success
50: session 010000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 010000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
50: session 008000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 008000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
50: session 006000000000000001: refreshed, realm=<sip.domain.ru>, username=<DavidMaze>, lifetime=0
50: session 006000000000000001: realm <sip.domain.ru> user <DavidMaze>: incoming packet REFRESH processed, success
51: session 008000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=5, rb=364, sp=5, sb=508
51: session 008000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 188.162.5.118:23038, reason: allocation timeout
51: session 008000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 008000000000000001: peer 213.232.207.000 deleted
51: session 010000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=10, rb=592, sp=10, sb=1032
51: session 010000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 188.162.5.118:34297, reason: allocation timeout
51: session 010000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 010000000000000001: peer 213.232.207.000 deleted
51: session 006000000000000001: usage: realm=<sip.domain.ru>, username=<DavidMaze>, rp=58, rb=7500, sp=9, sb=892
51: session 006000000000000001: closed (2nd stage), user <DavidMaze> realm <sip.domain.ru> origin <>, local 213.232.207.000:3478, remote 87.103.193.000:56186, reason: allocation timeout
51: session 006000000000000001: delete: realm=<sip.domain.ru>, username=<DavidMaze>
51: session 006000000000000001: peer 213.232.207.000 deleted
Also, two days ago I was getting 403: Forbidden IP, but that was fixed by commenting out listening-ip.
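For reference, a minimal sketch of the address-related lines on a non-NATted host with a single public IP (the values are illustrative assumptions, not verified config):
listening-ip=213.232.207.000
# external-ip is normally only needed behind NAT, where it maps the public
# address to the private one, e.g. external-ip=<public-ip>/<private-ip>;
# on a host that has only a public IP it can usually be omitted.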

Fixed the issue. For others:
First, check the issue in different browsers. I found that the call works in Mozilla Firefox but doesn't work in Chromium-based browsers;
You can enable extra-verbose mode with the -V flag (uppercase) or --Verbose. This can help, but the logs are very noisy and you won't need them 95% of the time;
While testing the TURN server via the very popular WebRTC sample tool Trickle ICE, you may see "authentication failed?" with a relay candidate on the next line. This is not necessarily a problem; compare against another TURN server that is known to work (example);
Check the client's firewall for blocked STUN/TURN server ports, including the TURN relay port range. That was my case: the client's firewall was blocking ports 24000-64000. (See the test sketch below.)
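To test reachability of the listener port and the relay allocation from the client side, coturn ships a small test client, turnutils_uclient; a rough sketch (flags from memory, verify with turnutils_uclient --help on your version):
turnutils_uclient -v -u DavidMaze -w Password -p 3478 sip.domain.ru
If this succeeds from the server itself but fails from the client's network, a firewall in between is the likely culprit.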

Related

Unable to connect to host with Apache Camel SFTP with public/private ssh keys

I'm facing a problem when trying to use Apache Camel to connect to an SFTP host controlled by a business partner. I have created an SSH public/private keypair, they have installed the public key on their server, and through both FileZilla and shell sftp I'm able to connect without any problems.
But when I try to connect with Apache Camel I receive an error: Auth fail for methods 'publickey,password'
I'm aware that there's an issue with the jsch library in Camel, but I have upgraded to Camel version 3.19, and according to the dependency tree (viewable with ./gradlew dependencies) I'm using the 'mwiede' fork of jsch, version 0.2.1.
The SFTP server that I'm trying to connect to is apparently rather old, but I have no influence on that. When using the shell sftp command it was necessary to use the option '-oHostKeyAlgorithms=+ssh-dss', but after that it works without a problem.
I'm running it locally on macOS in IntelliJ, with Spring Boot 2.6.7 and Java 17.
The Camel route looks like this:
public void configure() throws Exception {
    // Load the private key and register it in the Camel registry
    String privateKeyString = Files.readString(Path.of("/Users/jaan/.ssh/id_rsa_cloud-integration_test"), StandardCharsets.UTF_8);
    getCamelContext().getRegistry().bind("myPrivateKey", privateKeyString.getBytes(StandardCharsets.UTF_8));

    from(aws2S3(bucketId + "?amazonS3Client=#s3Client" + awsGetObjectUriParams))
        .choice()
            .when(body().isNull())
                .log("Looking for files in S3 bucket - but found none")
            .otherwise()
                .log("Found file in S3 [${headers.CamelAwsS3Key}]")
                .process(exchange -> {
                    exchange.getIn().setHeader("CamelAwsS3BucketDestinationName", bucketId);
                    exchange.getIn().setHeader("CamelAwsS3DestinationKey", generateFileName(exchange));
                    log.info("Uploading file to S3 bucket [{}] and prefix [{}]", bucketId, exchange.getIn().getHeader("CamelAwsS3DestinationKey"));
                })
                .to(aws2S3(bucketId + "?amazonS3Client=#s3Client&operation=copyObject"))
                .to(sftp(host + ":22/test?maximumReconnectAttempts=1")
                        .binary(true)
                        .privateKey("#myPrivateKey")
                        .username(sshUserName)
                        .jschLoggingLevel("TRACE")
                        .serverHostKeys("ssh-dss")
                        .knownHostsFile("/Users/jka/.ssh/known_hosts"));
}
I have also tried to simply copy the ssh private key into the route as a string.
The stack trace that I'm receiving is below:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.7)
dk.ds.cargo.Application : Starting Application using Java 17.0.5 on COM1865 with PID 47585 (/Users/jka/workspace_git/bis-cargo-programblade/build/classes/java/main started by jka in /Users/jka/workspace_git/bis-cargo-programblade)
dk.ds.cargo.Application : Running with Spring Boot v2.6.7, Spring v5.3.19
dk.ds.cargo.Application : The following 1 profile is active: "local"
o.s.b.devtools.restart.ChangeableUrls : The Class-Path manifest attribute in /Users/jka/.m2/repository/com/sun/xml/bind/jaxb-core/2.3.0/jaxb-core-2.3.0.jar referenced one or more files that do not exist: file:/Users/jka/.m2/repository/com/sun/xml/bind/jaxb-core/2.3.0/jaxb-api.jar
.e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable
.e.DevToolsPropertyDefaultsPostProcessor : For additional web related logging consider setting the 'logging.level.web' property to 'DEBUG'
o.s.cloud.context.scope.GenericScope : BeanFactory id=5934d1b4-b141-3085-8f00-cedb8da5fbc5
o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
o.apache.catalina.core.StandardService : Starting service [Tomcat]
org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.62]
o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3519 ms
o.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter#6587be01, org.springframework.security.web.context.SecurityContextPersistenceFilter#5943fb8e, org.springframework.security.web.header.HeaderWriterFilter#1182b1fe, org.springframework.security.web.csrf.CsrfFilter#47903918, org.springframework.security.web.authentication.logout.LogoutFilter#268e02b2, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter#66a704a1, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter#4c442cf0, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter#3a072250, org.springframework.security.web.savedrequest.RequestCacheAwareFilter#1bbe8c42, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter#491c5377, org.springframework.security.web.authentication.AnonymousAuthenticationFilter#2100053f, org.springframework.security.web.session.SessionManagementFilter#7cca7c8d, org.springframework.security.web.access.ExceptionTranslationFilter#1a79bb88, org.springframework.security.web.access.intercept.FilterSecurityInterceptor#2297c946]
o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/monitor'
o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
d.d.cargo.programblade.ProgrambladRoute : host <host ip adress>
d.d.cargo.programblade.ProgrambladRoute : userName <username>
.c.i.e.DefaultAutowiredLifecycleStrategy : Autowired property: amazonS3Client on component: aws2-s3 as exactly one instance of type: software.amazon.awssdk.services.s3.S3Client (software.amazon.awssdk.services.s3.DefaultS3Client) found in the registry
o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.19.0 (camel-1) is starting
o.a.c.impl.engine.AbstractCamelContext : Routes startup (started:1)
o.a.c.impl.engine.AbstractCamelContext : Started route1 (aws2-s3://<bucket ID>)
o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.19.0 (camel-1) started in 1s687ms (build:85ms init:777ms start:825ms)
dk.ds.cargo.Application : Started Application in 11.607 seconds (JVM running for 12.253)
dk.ds.cargo.Application : Spring application is ready to serve!
route1 : Found file in S3 [s3 bucket prefix]
d.d.cargo.programblade.ProgrambladRoute : Uploading file to S3 bucket [bucketID] and prefix [prefix]
o.a.c.c.file.remote.SftpOperations : JSCH -> Connecting to <host IP adress> port 22
o.a.c.c.file.remote.SftpOperations : JSCH -> Connection established
o.a.c.c.file.remote.SftpOperations : JSCH -> Remote version string: SSH-2.0-9.99 sshlib
o.a.c.c.file.remote.SftpOperations : JSCH -> Local version string: SSH-2.0-JSCH_0.2.1
o.a.c.c.file.remote.SftpOperations : JSCH -> CheckCiphers: chacha20-poly1305#openssh.com
o.a.c.c.file.remote.SftpOperations : JSCH -> CheckKexes: curve25519-sha256,curve25519-sha256#libssh.org,curve448-sha512
o.a.c.c.file.remote.SftpOperations : JSCH -> CheckSignatures: ssh-ed25519,ssh-ed448
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_KEXINIT sent
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_KEXINIT received
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: ssh-dss
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: aes256-ctr,twofish256-ctr,twofish-ctr,aes128-ctr,twofish128-ctr,3des-ctr,cast128-ctr,aes256-cbc,twofish256-cbc,twofish-cbc,aes128-cbc,twofish128-cbc,blowfish-cbc,3des-cbc,arcfour,cast128-cbc
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: aes256-ctr,twofish256-ctr,twofish-ctr,aes128-ctr,twofish128-ctr,3des-ctr,cast128-ctr,aes256-cbc,twofish256-cbc,twofish-cbc,aes128-cbc,twofish128-cbc,blowfish-cbc,3des-cbc,arcfour,cast128-cbc
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: hmac-sha2-512,hmac-sha2-256,hmac-sha1,hmac-md5,hmac-sha1-96,hmac-md5-96
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: hmac-sha2-512,hmac-sha2-256,hmac-sha1,hmac-md5,hmac-sha1-96,hmac-md5-96
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: zlib,none
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server: zlib,none
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server:
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server:
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: ssh-dss
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.com
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.com
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: none
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client: none
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client:
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client:
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: algorithm: diffie-hellman-group-exchange-sha256
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: host key algorithm: ssh-dss
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: server->client cipher: aes128-ctr MAC: hmac-sha2-256 compression: none
o.a.c.c.file.remote.SftpOperations : JSCH -> kex: client->server cipher: aes128-ctr MAC: hmac-sha2-256 compression: none
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_KEX_DH_GEX_REQUEST(2048<3072<8192) sent
o.a.c.c.file.remote.SftpOperations : JSCH -> expecting SSH_MSG_KEX_DH_GEX_GROUP
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_KEX_DH_GEX_INIT sent
o.a.c.c.file.remote.SftpOperations : JSCH -> expecting SSH_MSG_KEX_DH_GEX_REPLY
o.a.c.c.file.remote.SftpOperations : JSCH -> ssh_dss_verify: signature true
o.a.c.c.file.remote.SftpOperations : JSCH -> Host '<IP adress>' is known and matches the DSA host key
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_NEWKEYS sent
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_NEWKEYS received
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_SERVICE_REQUEST sent
o.a.c.c.file.remote.SftpOperations : JSCH -> SSH_MSG_SERVICE_ACCEPT received
o.a.c.c.file.remote.SftpOperations : JSCH -> Authentications that can continue: publickey
o.a.c.c.file.remote.SftpOperations : JSCH -> Next authentication method: publickey
o.a.c.c.file.remote.SftpOperations : JSCH -> Disconnecting from <IP adress> port 22
o.a.c.c.file.remote.RemoteFileProducer : Writing file failed with: Cannot connect to sftp://<username>#<IP adress>:22
o.a.c.p.e.DefaultErrorHandler : Failed delivery for (MessageId: 1EFB2ABB1EFFD39-0000000000000000 on ExchangeId: 1EFB2ABB1EFFD39-0000000000000000). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://<username>#<IP adress>:22
Message History (source location and message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
Source ID Processor Elapsed (ms)
route1/route1 from[aws2-s3://ds-cloud-integration-test?amazonS3C 12845806
...
route1/to2 sftp://<IP adress>:22/test-folder?maximumReconnec 0
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://<username>#<IP adress>:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:137)
at org.apache.camel.component.file.remote.RemoteFileProducer.connectIfNecessary(RemoteFileProducer.java:184)
at org.apache.camel.component.file.remote.RemoteFileProducer.preWriteCheck(RemoteFileProducer.java:133)
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:113)
at org.apache.camel.component.file.remote.RemoteFileProducer.process(RemoteFileProducer.java:61)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:191)
at org.apache.camel.support.cache.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:327)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:190)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:477)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:181)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:59)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:175)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:392)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.processBatch(AWS2S3Consumer.java:300)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.poll(AWS2S3Consumer.java:175)
at org.apache.camel.support.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:202)
at org.apache.camel.support.ScheduledPollConsumer.run(ScheduledPollConsumer.java:116)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: com.jcraft.jsch.JSchException: Auth fail for methods 'publickey,password'
at com.jcraft.jsch.Session.connect(Session.java:532)
at org.apache.camel.component.file.remote.SftpOperations.tryConnect(SftpOperations.java:160)
at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:92)
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:135)
... 23 common frames omitted
2022-12-16 13:27:03.248 WARN o.a.c.component.aws2.s3.AWS2S3Consumer : Exchange failed, so rolling back message status: Exchange[1EFB2ABB1EFFD39-0000000000000000]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://<username>#<IP adress>:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:137)
at org.apache.camel.component.file.remote.RemoteFileProducer.connectIfNecessary(RemoteFileProducer.java:184)
at org.apache.camel.component.file.remote.RemoteFileProducer.preWriteCheck(RemoteFileProducer.java:133)
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:113)
at org.apache.camel.component.file.remote.RemoteFileProducer.process(RemoteFileProducer.java:61)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:191)
at org.apache.camel.support.cache.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:327)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:190)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:477)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:181)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:59)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:175)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:392)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.processBatch(AWS2S3Consumer.java:300)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.poll(AWS2S3Consumer.java:175)
at org.apache.camel.support.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:202)
at org.apache.camel.support.ScheduledPollConsumer.run(ScheduledPollConsumer.java:116)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: com.jcraft.jsch.JSchException: Auth fail for methods 'publickey,password'
at com.jcraft.jsch.Session.connect(Session.java:532)
at org.apache.camel.component.file.remote.SftpOperations.tryConnect(SftpOperations.java:160)
at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:92)
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:135)
... 23 common frames omitted
2022-12-16 13:27:03.249 WARN o.a.c.component.aws2.s3.AWS2S3Consumer : Error processing exchange. Exchange[1EFB2ABB1EFFD39-0000000000000000]. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot connect to sftp://<username>#<IP adress>:22]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://<username>#<IP adress>:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:137)
at org.apache.camel.component.file.remote.RemoteFileProducer.connectIfNecessary(RemoteFileProducer.java:184)
at org.apache.camel.component.file.remote.RemoteFileProducer.preWriteCheck(RemoteFileProducer.java:133)
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:113)
at org.apache.camel.component.file.remote.RemoteFileProducer.process(RemoteFileProducer.java:61)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:191)
at org.apache.camel.support.cache.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:327)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:190)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:477)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:181)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:59)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:175)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:392)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.processBatch(AWS2S3Consumer.java:300)
at org.apache.camel.component.aws2.s3.AWS2S3Consumer.poll(AWS2S3Consumer.java:175)
at org.apache.camel.support.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:202)
at org.apache.camel.support.ScheduledPollConsumer.run(ScheduledPollConsumer.java:116)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: com.jcraft.jsch.JSchException: Auth fail for methods 'publickey,password'
at com.jcraft.jsch.Session.connect(Session.java:532)
at org.apache.camel.component.file.remote.SftpOperations.tryConnect(SftpOperations.java:160)
at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:92)
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:135)
... 23 common frames omitted
I hope I can get some help to make this work and avoid being forced to implement it in plain Java with an SFTP library.
In relation to this issue: Auth fail with JSch against libssh server with "rsa-sha2-512"
the solution is to set these two properties:
.serverHostKeys("ssh-dss")
.publicKeyAcceptedAlgorithms("ssh-rsa")
and then it worked.
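Putting it together, the sftp endpoint from the route above becomes something like this (a sketch using the same values as the original route):
.to(sftp(host + ":22/test?maximumReconnectAttempts=1")
        .binary(true)
        .privateKey("#myPrivateKey")
        .username(sshUserName)
        .jschLoggingLevel("TRACE")
        // the old server offers only an ssh-dss host key, and accepts
        // only the legacy ssh-rsa signature for RSA public key auth
        .serverHostKeys("ssh-dss")
        .publicKeyAcceptedAlgorithms("ssh-rsa")
        .knownHostsFile("/Users/jka/.ssh/known_hosts"));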

otbr-agent with NRF52840 as RCP not working

I am using a custom Linux board and have taken the latest code of otbr-agent.
I have also taken the latest code of ot-nrf528xx for the nRF52840.
otbr-agent is able to communicate with the RCP successfully, and my OpenThread network is created as well.
But it randomly fails with the following error and exits:
otbr-agent[14116]: 00:35:22.736 [WARN]-PLAT----: radio tx timeout
otbr-agent[14116]: 00:35:22.736 [CRIT]-PLAT----: HandleRcpTimeout() at
/usr/src/debug/otbr/git-r0/ot-br-posix/third_party/openthread/repo/src/lib/spinel
/radio_spinel_impl.hpp:2218: RadioSpinelNoResponse
Full logs of otbr-agent from start are below; it exited even without any activity.
Once I was able to commission and communicate with a device, and afterwards it exited with the same error.
Is this an issue in otbr, or in the RCP?
#/usr/sbin/otbr-agent -I wpan0 -B wlan0 spinel+hdlc+uart:///dev/ttymxc0 trel://wlan0 -v
otbr-agent[14116]: [INFO]-UTILS---: Running 0.3.0-fe1263578-dirty
otbr-agent[14116]: [INFO]-UTILS---: Thread version: 1.2.0
otbr-agent[14116]: [INFO]-UTILS---: Thread interface: wpan0
otbr-agent[14116]: [INFO]-UTILS---: Backbone interface: wlan0
otbr-agent[14116]: [INFO]-UTILS---: Radio URL: spinel+hdlc+uart:///dev/ttymxc0
otbr-agent[14116]: [INFO]-UTILS---: Radio URL: trel://wlan0
otbr-agent[14116]: 49d.18:38:21.580 [INFO]-PLAT----: RCP reset: RESET_POWER_ON
otbr-agent[14116]: 49d.18:38:21.609 [NOTE]-PLAT----: RCP API Version: 5
otbr-agent[14116]: 00:00:00.073 [INFO]-CORE----: [settings] Read NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3,
keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.075 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1f9ed, maccntr:0x1f7f2, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.146 [INFO]-CORE----: [settings] Read OmrPrefix fd7a:10e5:333a:5b12::/64
otbr-agent[14116]: 00:00:00.150 [INFO]-CORE----: [settings] Read OnLinkPrefix fd2f:7c27:62f6:0::/64
otbr-agent[14116]: 00:00:00.158 [INFO]-BR------: Infra interface (7) state changed: NOT RUNNING -> RUNNING
otbr-agent[14116]: [INFO]-AGENT---: Set state callback: OK
otbr-agent[14116]: 00:00:00.159 [INFO]-SRP-----: [server] selected port 53535
otbr-agent[14116]: 00:00:00.173 [INFO]-N-DATA--: Publisher: Publishing DNS/SRP service unicast (ml-eid, port:53535)
otbr-agent[14116]: 00:00:00.174 [INFO]-N-DATA--: Publisher: DNS/SRP service - State: NoEntry -> ToAdd
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: [INFO]-AGENT---: Initialize OpenThread Border Router Agent: OK
otbr-agent[14116]: [INFO]-UTILS---: Border router agent started.
otbr-agent[14116]: 00:00:00.202 [INFO]-CORE----: Notifier: StateChanged (0x101fc300) [KeySeqCntr NetData Channel PanId NetName ExtPanId NetworkKey PSKc
SecPolicy ...
otbr-agent[14116]: 00:00:00.213 [INFO]-CORE----: Notifier: StateChanged (0x101fc300) ... ActDset]
otbr-agent[14116]: 00:00:00.214 [INFO]-MLE-----: [announce-sender] ChannelMask:{ 11-26 }, period:21500
otbr-agent[14116]: 00:00:00.214 [INFO]-MLE-----: [announce-sender] StartingChannel:18
otbr-agent[14116]: 00:00:00.222 [INFO]-MLE-----: [announce-sender] StartingChannel:18
otbr-agent[14116]: 00:00:00.250 [INFO]-PLAT----: [netif] Host netif is down
otbr-agent[14116]: 00:00:00.262 [INFO]-PLAT----: [netif] Added multicast address ff02::1
otbr-agent[14116]: 00:00:00.262 [INFO]-PLAT----: [netif] Added multicast address ff03::1
otbr-agent[14116]: 00:00:00.263 [INFO]-PLAT----: [netif] Added multicast address ff03::fc
otbr-agent[14116]: 00:00:00.281 [INFO]-PLAT----: [netif] Sent request#1 to add fe80::ac12:db55:3a8f:7115/64
otbr-agent[14116]: 00:00:00.282 [NOTE]-MLE-----: Role disabled -> detached
otbr-agent[14116]: 00:00:00.297 [INFO]-PLAT----: [netif] Sent request#2 to add fd5d:e08d:c5ec:42fc:7c75:ca66:5c72:a43b/64
otbr-agent[14116]: 00:00:00.313 [INFO]-PLAT----: [netif] Added multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:00.313 [INFO]-PLAT----: [netif] Added multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:00.323 [INFO]-PLAT----: [netif] Sent request#3 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:00.323 [INFO]-MLE-----: Attempt to become router
otbr-agent[14116]: 00:00:00.325 [INFO]-CORE----: [settings] Read NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3,
keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.327 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1f9ed, maccntr:0x1f7f2, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.337 [INFO]-CORE----: [settings] Saved NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3,
keyseq:0x0, ...
otbr-agent[14116]: 00:00:00.345 [INFO]-CORE----: [settings] ... pid:0x54beb0f8, mlecntr:0x1fdd6, maccntr:0x1fbda, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:00.345 [INFO]-MLE-----: Send Link Request (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:00.345 [INFO]-CORE----: Notifier: StateChanged (0x0100103d) [Ip6+ Role LLAddr MLAddr Rloc+ Ip6Mult+ NetifState]
otbr-agent[14116]: 00:00:00.353 [INFO]-MLE-----: [announce-sender] Stopped
otbr-agent[14116]: 00:00:00.354 [NOTE]-PLAT----: [netif] Changing interface state to up.
otbr-agent[14116]: [INFO]-AGENT---: Thread is down
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: 00:00:00.475 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:00.539 [INFO]-CORE----: Notifier: StateChanged (0x00001000) [Ip6Mult+]
otbr-agent[14116]: 00:00:00.551 [INFO]-PLAT----: [trel] Interface address added successfully.
otbr-agent[14116]: 00:00:00.607 [INFO]-MAC-----: Sent IPv6 UDP msg, len:82, chksum:51e5, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:00.626 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:00.626 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:00.645 [NOTE]-PLAT----: [netif] ADD [U] fe80::ac12:db55:3a8f:7115 (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.646 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.646 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.674 [INFO]-PLAT----: [netif] Succeeded to process request#1
otbr-agent[14116]: 00:00:00.714 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:7c75:ca66:5c72:a43b (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.714 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.715 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.760 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:00.760 [INFO]-PLAT----: [netif] Succeeded to process request#2
otbr-agent[14116]: 00:00:00.824 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:00.824 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (already subscribed, ignored)
otbr-agent[14116]: 00:00:00.825 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:00.825 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:00.825 [INFO]-PLAT----: [netif] Succeeded to process request#3
otbr-agent[14116]: 00:00:00.825 [INFO]-PLAT----: [netif] Host netif is up
otbr-agent[14116]: 00:00:01.220 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:01.222 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:01.222 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:01.223 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:01.223 [INFO]-CORE----: Notifier: StateChanged (0x00001000) [Ip6Mult+]
otbr-agent[14116]: 00:00:02.157 [NOTE]-MLE-----: RLOC16 e000 -> fffe
otbr-agent[14116]: 00:00:02.163 [INFO]-PLAT----: [netif] Sent request#4 to remove fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:02.165 [INFO]-MLE-----: AttachState Idle -> Start
otbr-agent[14116]: 00:00:02.166 [INFO]-CORE----: Notifier: StateChanged (0x10000040) [Rloc- ActDset]
otbr-agent[14116]: 00:00:02.181 [NOTE]-PLAT----: [netif] DEL [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (not found, ignored)
otbr-agent[14116]: 00:00:02.181 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:02.181 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:02.182 [INFO]-PLAT----: [netif] Succeeded to process request#4
otbr-agent[14116]: 00:00:02.413 [NOTE]-MLE-----: Attempt to attach - attempt 1, any-partition reattaching with Active Dataset
otbr-agent[14116]: 00:00:02.413 [INFO]-MLE-----: AttachState Start -> ParentReqRouters
otbr-agent[14116]: 00:00:02.414 [INFO]-MLE-----: Send Parent Request to routers (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:02.433 [INFO]-MAC-----: Sent IPv6 UDP msg, len:84, chksum:503d, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:02.434 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:02.434 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:03.164 [INFO]-MLE-----: AttachState ParentReqRouters -> ParentReqReeds
otbr-agent[14116]: 00:00:03.164 [INFO]-MLE-----: Send Parent Request to routers and REEDs (ff02:0:0:0:0:0:0:2)
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: Sent IPv6 UDP msg, len:84, chksum:3d1a, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:03.183 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:2]:19788
otbr-agent[14116]: 00:00:04.415 [INFO]-MLE-----: AttachState ParentReqReeds -> Idle
otbr-agent[14116]: 00:00:04.416 [NOTE]-MLE-----: Allocate router id 56
otbr-agent[14116]: 00:00:04.416 [NOTE]-MLE-----: RLOC16 fffe -> e000
otbr-agent[14116]: 00:00:04.427 [INFO]-PLAT----: [netif] Sent request#5 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:e000/64
otbr-agent[14116]: 00:00:04.428 [NOTE]-MLE-----: Role detached -> leader
otbr-agent[14116]: 00:00:04.449 [INFO]-PLAT----: [netif] Sent request#6 to add fd5d:e08d:c5ec:42fc:0:ff:fe00:fc00/64
otbr-agent[14116]: 00:00:04.452 [INFO]-PLAT----: [netif] Added multicast address ff02::2
otbr-agent[14116]: 00:00:04.453 [INFO]-PLAT----: [netif] Added multicast address ff03::2
otbr-agent[14116]: 00:00:04.459 [NOTE]-MLE-----: Leader partition id 0x6f7040fb
otbr-agent[14116]: 00:00:04.459 [INFO]-CORE----: Notifier: StateChanged (0x100012a5) [Ip6+ Role Rloc+ PartitionId NetData Ip6Mult+ ActDset]
otbr-agent[14116]: 00:00:04.461 [INFO]-MLE-----: Send Data Response (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:04.461 [INFO]-BBR-----: PBBR state: None
otbr-agent[14116]: 00:00:04.463 [INFO]-BBR-----: Domain Prefix: ::/0, state: None
otbr-agent[14116]: 00:00:04.473 [INFO]-CORE----: [settings] Saved NetworkInfo {rloc:0xe000, extaddr:ae12db553a8f7115, role:leader, mode:0x0f, version:3,
keyseq:0x0, ...
otbr-agent[14116]: 00:00:04.474 [INFO]-CORE----: [settings] ... pid:0x6f7040fb, mlecntr:0x1fdd9, maccntr:0x1fbda, mliid:7c75ca665c72a43b}
otbr-agent[14116]: 00:00:04.474 [INFO]-MLE-----: [announce-sender] Started
otbr-agent[14116]: 00:00:04.480 [INFO]-MESH-CP-: Border Agent start listening on port 0
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Border Routing manager started
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Start Router Solicitation, scheduled in 803 milliseconds
otbr-agent[14116]: 00:00:04.481 [INFO]-BR------: Start evaluating routing policy, scheduled in 162 milliseconds
otbr-agent[14116]: 00:00:04.481 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:ToAdd) in netdata - total:0, preferred:0, desired:2
otbr-agent[14116]: 00:00:04.481 [INFO]-N-DATA--: Publisher: DNS/SRP service - State: ToAdd -> Adding
otbr-agent[14116]: 00:00:04.482 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:Adding) - update in 2270 msec
otbr-agent[14116]: [INFO]-AGENT---: Thread is up
otbr-agent[14116]: [INFO]-AGENT---: Stop Thread Border Agent
otbr-agent[14116]: [INFO]-ADPROXY-: Stopped
otbr-agent[14116]: [INFO]-ADPROXY-: Started
otbr-agent[14116]: [INFO]-MDNS----: Avahi client state changed to 2.
otbr-agent[14116]: [INFO]-MDNS----: Avahi client ready.
otbr-agent[14116]: [INFO]-AGENT---: Publish meshcop service OpenThread Border Router._meshcop._udp.local.
otbr-agent[14116]: [INFO]-MDNS----: Avahi group change to state 0.
otbr-agent[14116]: [ERR ]-MDNS----: Group ready.
otbr-agent[14116]: [INFO]-MDNS----: Create service OpenThread Border Router._meshcop._udp for host localhost
otbr-agent[14116]: [INFO]-MDNS----: Commit service OpenThread Border Router._meshcop._udp
otbr-agent[14116]: [INFO]-ADPROXY-: Publish all hosts and services
otbr-agent[14116]: [INFO]-AGENT---: Start Thread Border Agent: OK
otbr-agent[14116]: 00:00:04.683 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:e000 (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.684 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.684 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.695 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.695 [INFO]-PLAT----: [netif] Succeeded to process request#5
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:04.697 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.701 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:04.706 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:04.707 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
otbr-agent[14116]: [INFO]-MDNS----: Avahi group change to state 1.
otbr-agent[14116]: [ERR ]-MDNS----: Group ready.
otbr-agent[14116]: 00:00:04.710 [INFO]-BR------: Evaluating routing policy
otbr-agent[14116]: 00:00:04.716 [INFO]-BR------: EvaluateOmrPrefix: No valid OMR prefixes found in Thread network
otbr-agent[14116]: 00:00:04.720 [INFO]-N-DATA--: Sent server data notification
otbr-agent[14116]: 00:00:04.720 [INFO]-BR------: Published local OMR prefix fd7a:10e5:333a:5b12::/64 in Thread network
otbr-agent[14116]: 00:00:04.727 [INFO]-BR------: Send OMR prefix fd7a:10e5:333a:5b12::/64 in RIO (valid lifetime = 1800 seconds)
otbr-agent[14116]: 00:00:04.729 [INFO]-BR------: Sent Router Advertisement on interface 7
otbr-agent[14116]: 00:00:04.730 [INFO]-BR------: Router advertisement scheduled in 16 seconds
otbr-agent[14116]: 00:00:04.731 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.737 [NOTE]-PLAT----: [netif] ADD [U] fd5d:e08d:c5ec:42fc:0:ff:fe00:fc00 (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.737 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.737 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.740 [INFO]-N-DATA--: Received network data registration
otbr-agent[14116]: 00:00:04.741 [INFO]-N-DATA--: Allocated Context ID = 1
otbr-agent[14116]: 00:00:04.742 [INFO]-N-DATA--: Sent network data registration acknowledgment
otbr-agent[14116]: 00:00:04.743 [INFO]-BR------: Received Router Advertisement from fe80:0:0:0:ac12:db55:3a8f:7115 on interface 7
otbr-agent[14116]: 00:00:04.763 [INFO]-PLAT----: [netif] Succeeded to process request#6
otbr-agent[14116]: 00:00:04.772 [INFO]-CORE----: Notifier: StateChanged (0x00000200) [NetData]
otbr-agent[14116]: 00:00:04.772 [INFO]-MLE-----: Send Data Response (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:04.772 [INFO]-BBR-----: PBBR state: None
otbr-agent[14116]: 00:00:04.773 [INFO]-BBR-----: Domain Prefix: ::/0, state: None
otbr-agent[14116]: 00:00:04.773 [INFO]-CORE----: [settings] Read SlaacIidSecretKey
otbr-agent[14116]: 00:00:04.773 [INFO]-UTIL----: SLAAC: Adding address fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec
otbr-agent[14116]: 00:00:04.792 [INFO]-PLAT----: [netif] Sent request#7 to add fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec/64
otbr-agent[14116]: 00:00:04.793 [INFO]-BR------: Start evaluating routing policy, scheduled in 191 milliseconds
otbr-agent[14116]: 00:00:04.793 [INFO]-N-DATA--: Publisher: DNS/SRP service (state:Adding) in netdata - total:0, preferred:0, desired:2
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: Sent IPv6 UDP msg, len:96, chksum:bf39, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:04.799 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:04.802 [INFO]-CORE----: Notifier: StateChanged (0x00000001) [Ip6+]
otbr-agent[14116]: 00:00:04.818 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:04.819 [NOTE]-PLAT----: [netif] ADD [U] fd7a:10e5:333a:5b12:572a:d02a:e7fb:a8ec (already subscribed, ignored)
otbr-agent[14116]: 00:00:04.819 [WARN]-PLAT----: [netif] Unexpected address type (6).
otbr-agent[14116]: 00:00:04.822 [WARN]-PLAT----: [netif] Unexpected address type (8).
otbr-agent[14116]: 00:00:04.839 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:04.840 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:04.841 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:04.842 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.843 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:04.848 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:04.849 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:04.849 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
otbr-agent[14116]: 00:00:04.852 [INFO]-PLAT----: [netif] Succeeded to process request#7
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: Sent IPv6 UDP msg, len:118, chksum:b222, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:04.872 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:05.207 [INFO]-BR------: Evaluating routing policy
otbr-agent[14116]: 00:00:05.208 [INFO]-BR------: Send OMR prefix fd7a:10e5:333a:5b12::/64 in RIO (valid lifetime = 1800 seconds)
otbr-agent[14116]: 00:00:05.210 [INFO]-BR------: Sent Router Advertisement on interface 7
otbr-agent[14116]: 00:00:05.210 [INFO]-BR------: Router advertisement scheduled in 16 seconds
otbr-agent[14116]: 00:00:05.211 [INFO]-BR------: Received Router Advertisement from fe80:0:0:0:ac12:db55:3a8f:7115 on interface 7
otbr-agent[14116]: 00:00:05.284 [INFO]-BR------: Router solicitation times out
otbr-agent[14116]: 00:00:05.381 [INFO]-MLE-----: Send Advertisement (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:00:05.399 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:83f4, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:00:05.405 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:00:05.409 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:00:05.540 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:00:05.558 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::2
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::2
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::1:3
otbr-agent[14116]: 00:00:05.573 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff33:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:05.574 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff32:40:fd5d:e08d:c5ec:42fc:0:1
otbr-agent[14116]: 00:00:05.580 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::fc
otbr-agent[14116]: 00:00:05.580 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff03::1
otbr-agent[14116]: 00:00:05.581 [NOTE]-PLAT----: [netif] Will not subscribe duplicate multicast address ff02::16
.
.
.
.
otbr-agent[14116]: 00:34:30.334 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:a5b1, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:34:30.335 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:30.338 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:34:34.259 [INFO]-MLE-----: Send Announce on channel 21
otbr-agent[14116]: 00:34:34.281 [INFO]-MAC-----: Sent IPv6 UDP msg, len:83, chksum:9a63, to:0xffff, sec:yes, prio:net, radio:all
otbr-agent[14116]: 00:34:34.282 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:34.282 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:34:55.946 [INFO]-MLE-----: Send Announce on channel 22
otbr-agent[14116]: 00:34:55.971 [INFO]-MAC-----: Sent IPv6 UDP msg, len:83, chksum:3dc6, to:0xffff, sec:yes, prio:net, radio:all
otbr-agent[14116]: 00:34:55.972 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:34:55.972 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:35:02.159 [WARN]-PLAT----: [netif] Failed to transmit, error:Drop
otbr-agent[14116]: 00:35:12.789 [INFO]-MLE-----: Send Advertisement (ff02:0:0:0:0:0:0:1)
otbr-agent[14116]: 00:35:12.807 [INFO]-MAC-----: Sent IPv6 UDP msg, len:90, chksum:daa6, to:0xffff, sec:no, prio:net, radio:all
otbr-agent[14116]: 00:35:12.814 [INFO]-MAC-----: src:[fe80:0:0:0:ac12:db55:3a8f:7115]:19788
otbr-agent[14116]: 00:35:12.814 [INFO]-MAC-----: dst:[ff02:0:0:0:0:0:0:1]:19788
otbr-agent[14116]: 00:35:17.734 [INFO]-MLE-----: Send Announce on channel 23
otbr-agent[14116]: 00:35:22.736 [WARN]-PLAT----: radio tx timeout
otbr-agent[14116]: 00:35:22.736 [CRIT]-PLAT----: HandleRcpTimeout() at /usr/src/debug/otbr/git-r0/ot-br-posix/third_party/openthread/repo/src/lib/spinel/radio_spinel_impl.hpp:2218: RadioSpinelNoResponse
These prints are coming from the OTBR application. The CRIT line (HandleRcpTimeout() raising RadioSpinelNoResponse) shows that otbr-agent stopped receiving responses over the spinel link, so the root cause is a communication problem between your OTBR app and the RCP.
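A common first check (an assumption on my part, since the question doesn't show how otbr-agent is launched) is the serial link settings in the radio URL passed to otbr-agent: a baud-rate or flow-control mismatch between the host and the RCP firmware typically shows up as exactly this kind of RadioSpinelNoResponse timeout. The device path and baud rate below are illustrative placeholders, not values from the question:
otbr-agent -I wpan0 -B eth0 'spinel+hdlc+uart:///dev/ttyACM0?uart-baudrate=115200'
If those settings already match, re-flashing the RCP with firmware built from the same OpenThread revision as the host side is the other commonly suggested fix.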

pika `pop from an empty queue`

I'm using pika in a Kubernetes cluster and consuming messages from a queue; each message triggers a function in a new thread. However, the connection to RabbitMQ seems to crash. These are the best logs I've found so far:
2020-12-23 10:39:10,906] WARNING - WRITE indicated on fd=9, but writer callback is None; events=0b100 {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/selector_ioloop_adapter.py:393}
(repeats to a total of n=38 times)
2020-12-23 10:39:10,908] ERROR - _AsyncBaseTransport._produce() failed, aborting connection: error=IndexError('pop from an empty deque'); sock=<socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.100.200', 44892), raddr=('192.168.101.201', 5672)>; Caller's stack:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py", line 1097, in _on_socket_writable
self._produce()
File "/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py", line 822, in _produce
chunk = self._tx_buffers.popleft()
IndexError: pop from an empty deque
{/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py:1103}
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py", line 1097, in _on_socket_writable
self._produce()
File "/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py", line 822, in _produce
chunk = self._tx_buffers.popleft()
IndexError: pop from an empty deque
2020-12-23 10:39:10,908] INFO - _AsyncTransportBase._initate_abort(): Initiating abrupt asynchronous transport shutdown: state=1; error=IndexError('pop from an empty deque'); <socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.100.200', 44892), raddr=('192.168.101.201', 5672)> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py:904}
2020-12-23 10:39:10,908] INFO - Deactivating transport: state=1; <socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.100.200', 44892), raddr=('192.168.101.201', 5672)> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py:869}
2020-12-23 10:39:10,909] ERROR - connection_lost: StreamLostError: ("Stream connection lost: IndexError('pop from an empty deque')",) {/usr/local/lib/python3.9/site-packages/pika/adapters/base_connection.py:428}
2020-12-23 10:39:10,909] INFO - AMQP stack terminated, failed to connect, or aborted: opened=True, error-arg=StreamLostError: ("Stream connection lost: IndexError('pop from an empty deque')",); pending-error=None {/usr/local/lib/python3.9/site-packages/pika/connection.py:1996}
2020-12-23 10:39:10,909] INFO - Stack terminated due to StreamLostError: ("Stream connection lost: IndexError('pop from an empty deque')",) {/usr/local/lib/python3.9/site-packages/pika/connection.py:2065}
2020-12-23 10:39:10,909] INFO - Closing transport socket and unlinking: state=2; <socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.100.200', 44892), raddr=('192.168.101.201', 5672)> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py:882}
2020-12-23 10:39:10,909] ERROR - Unexpected connection close detected: StreamLostError: ("Stream connection lost: IndexError('pop from an empty deque')",) {/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py:520}
2020-12-23 10:39:31,416] INFO - Pika version 1.1.0 connecting to ('192.168.101.201', 5672) {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/connection_workflow.py:179}
2020-12-23 10:39:31,417] INFO - Socket connected: <socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.100.200', 47142), raddr=('192.168.101.201', 5672)> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/io_services_utils.py:345}
2020-12-23 10:39:31,418] INFO - Streaming transport linked up: (<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f81b3099a60>, _StreamingProtocolShim: <SelectConnection PROTOCOL transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f81b3099a60> params=<ConnectionParameters host=rabbitmq-0.rabbitmq.testing.svc.cluster.local port=5672 virtual_host=/ ssl=False>>). {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/connection_workflow.py:428}
2020-12-23 10:39:31,421] INFO - AMQPConnector - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f81b3099a60> params=<ConnectionParameters host=rabbitmq-0.rabbitmq.testing.svc.cluster.local port=5672 virtual_host=/ ssl=False>> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/connection_workflow.py:293}
2020-12-23 10:39:31,421] INFO - AMQPConnectionWorkflow - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f81b3099a60> params=<ConnectionParameters host=rabbitmq-0.rabbitmq.testing.svc.cluster.local port=5672 virtual_host=/ ssl=False>> {/usr/local/lib/python3.9/site-packages/pika/adapters/utils/connection_workflow.py:725}
2020-12-23 10:39:31,421] INFO - Connection workflow succeeded: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f81b3099a60> params=<ConnectionParameters host=rabbitmq-0.rabbitmq.testing.svc.cluster.local port=5672 virtual_host=/ ssl=False>> {/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py:452}
2020-12-23 10:39:31,422] INFO - Created channel=1 {/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py:1247}
My consumer has the following definition:
import threading
import time

import pika

# rabbit_user, rabbit_password, rabbit_host and rabbit_port are defined elsewhere.

def publish_message(channel, message):
    channel.basic_publish(exchange='',
                          routing_key='my_queue',
                          body=message)

def connect_to_mq():
    credentials = pika.PlainCredentials(rabbit_user, rabbit_password)
    parameters = pika.ConnectionParameters(rabbit_host, rabbit_port, '/', credentials)
    connection = pika.BlockingConnection(parameters=parameters)
    channel = connection.channel()
    channel.queue_declare(queue='my_queue')
    return connection, channel

def on_message(channel, method_frame, header_frame, body):
    message = body.decode('utf-8')
    if message == 'do_work':
        thread = threading.Thread(target=start_processing, args=(channel,))
        thread.start()
        publish_message(channel, 'initiated thread')

def start_processing(channel):
    publish_message(channel, 'starting...')
    time.sleep(240)
    publish_message(channel, 'processing complete!')

def main():
    connection, channel = connect_to_mq()
    channel.basic_consume(queue='my_queue',
                          auto_ack=True,
                          on_message_callback=on_message)
    channel.start_consuming()

if __name__ == '__main__':
    main()
Is there anything inherently wrong with my implementation and strategy for handling messages and workloads in separate threads that is causing this to happen?
Pika isn't thread-safe by default; you should ideally keep one connection per thread. In your code, start_processing publishes on the consumer's channel from a worker thread while the connection's I/O loop is also using it, which is the kind of cross-thread use that can corrupt the transport's buffers and produce this "pop from an empty deque" error.
There are a bunch of example implementations here, and I have a thread-safe RPC example here that you can look at as well, but I would recommend using one of their reference implementations for threading.
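For illustration, here is a minimal sketch of that pattern under the queue names used above; it is not the questioner's code. The pika API doing the work is BlockingConnection.add_callback_threadsafe(), which hands the publish back to the connection's I/O thread instead of calling the channel from the worker:
import functools
import threading
import time

import pika

def do_work(connection, channel):
    # Worker thread: do the slow job here, but never touch the
    # channel from this thread directly.
    time.sleep(240)
    # Schedule the publish on the connection's own I/O thread.
    publish = functools.partial(channel.basic_publish,
                                exchange='',
                                routing_key='my_queue',
                                body='processing complete!')
    connection.add_callback_threadsafe(publish)

def on_message(connection, channel, method_frame, header_frame, body):
    if body.decode('utf-8') == 'do_work':
        threading.Thread(target=do_work, args=(connection, channel)).start()

def main():
    # 'localhost' is a placeholder for the real broker address.
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='my_queue')
    channel.basic_consume(queue='my_queue',
                          auto_ack=True,
                          on_message_callback=functools.partial(on_message, connection))
    channel.start_consuming()  # also runs the callbacks queued by worker threads

if __name__ == '__main__':
    main()
This way basic_publish is only ever executed on the thread running start_consuming(), which is the constraint pika's own threading examples enforce.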

Spring AMQP (RabbitMQ) Throwing Channel Shutdown Error

I am trying to call channel.basicReject() to requeue a message based on some condition, by creating a MethodInterceptor (ConsumerAdvice) and adding it to the SMLC with factory.setAdviceChain(new ConsumerAdvice()). I also have concurrentConsumers set to 10. The moment my reject condition is met, I issue the basicReject command; the message gets redelivered and processed by another consumer. During this redelivery process I get the error below:
2019-11-07 17:34:13.268 ERROR 29385 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
2019-11-07 17:34:13.268 DEBUG 29385 --- [ool-2-thread-13] o.s.a.r.listener.BlockingQueueConsumer : Received shutdown signal for consumer tag=amq.ctag-HUaN71TZUqMfLDR7k6LwGQ
com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:178)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:111)
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:670)
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:597)
at java.lang.Thread.run(Thread.java:748)
My message is not getting lost, but I am seeing a bunch of the errors above and am unable to understand why this is happening. If anyone has any clues, please guide me.
Below are the trace logs:
2019-11-08 02:11:31.883 TRACE 8695 --- [askExecutor-138] o.s.a.r.c.CachingConnectionFactory : AMQChannel(amqp://guest#127.0.0.1:5672/,99) channel.getChannelNumber()
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ArgumentUtils : Channel number before triggering redelivery : 99
2019-11-08 02:11:31.883 TRACE 8695 --- [askExecutor-138] o.s.a.r.c.CachingConnectionFactory : AMQChannel(amqp://guest#127.0.0.1:5672/,99) channel.basicReject([2, true])
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ArgumentUtils : ==============================================================================
2019-11-08 02:11:31.883 INFO 8695 --- [askExecutor-138] c.g.s.w.consumer.advice.ConsumerAdvice : Requeue Message attempted, status : true
2019-11-08 02:11:31.884 TRACE 8695 --- [askExecutor-138] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for message from consumer.
2019-11-08 02:11:31.884 TRACE 8695 --- [askExecutor-138] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer#7783912f: tags=[[amq.ctag-eY7LN-1pSXPX8FKRBgt-ug]], channel=Cached Rabbit Channel: AMQChannel(amqp://guest#127.0.0.1:5672/,99), conn: Proxy#37ffe4f3 Shared Rabbit Connection: SimpleConnection#708dfe10 [delegate=amqp://guest#127.0.0.1:5672/, localPort= 58638], acknowledgeMode=AUTO local queue size=0
2019-11-08 02:11:31.884 DEBUG 8695 --- [askExecutor-138] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.checkShutdown(BlockingQueueConsumer.java:436)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.nextMessage(BlockingQueueConsumer.java:501)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:843)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:832)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:78)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1073)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:178)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:111)
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:670)
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:597)
... 1 common frames omitted
2019-11-08 02:11:31.884 ERROR 8695 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 2, class-id=60, method-id=90)
You need to show your code and configuration.
It seems the SMLC is running with its default configuration, which acknowledges messages automatically; the failure occurs because you already rejected the delivery yourself, so when the container later tries to settle it, the broker no longer knows the delivery tag (hence PRECONDITION_FAILED - unknown delivery tag). Why are you interacting with the channel directly?
You can simply throw an exception and the container will reject the message on your behalf.
I don't know if it will be helpful for someone, but I had the same error because of invalid input data. When I added the following properties:
"rabbit.listener.acknowledgeMode": "MANUAL",
"rabbit.listener.defaultRequeueRejected": "true",
"rabbit.listener.prefetchCount": "1",
the problem stopped crashing my program; the failure now only stopped my listener.

nginx stream_ssl_preread module unable to read ssl_preread_server_name

I am trying to set up nginx to map TLS connections to different backends based on the SNI server name. From what I can tell, my client is sending the server name, but the preread module is reading only a hyphen.
Here is my nginx config:
stream {
    map_hash_bucket_size 64;

    ############################################################
    ### logging
    log_format log_stream '$remote_addr [$time_local] $protocol [$ssl_preread_server_name] [$ssl_preread_alpn_protocols] [$instanceport] '
                          '$status $bytes_sent $bytes_received $session_time';
    error_log /usr/home/glance/Logs/pservernginx.error.log info;
    access_log /usr/home/glance/Logs/pservernginx.access.log log_stream;

    ############################################################
    ### ssl configuration
    ssl_certificate /usr/home/glance/GlanceReleases/star.myglance.org.pem;
    ssl_certificate_key /usr/home/glance/GlanceReleases/star.myglance.org.pem;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5:!RC4;
    limit_conn_zone $binary_remote_addr zone=ip_addr:10m;

    ########################################################################
    ### Raw TLS PServer Connections
    ### Listen for TLS on 5501 and forward to TCP sock 6500 (socket port)
    ### https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
    map $ssl_preread_server_name $instanceport {
        presence.myglance.org 6500;
        presence-1.myglance.org 6501;
        presence-2.myglance.org 6502;
        default glance-no-upstream-instance-configured;
    }

    server {
        listen 5501 ssl;
        ssl_preread on;
        proxy_connect_timeout 20s;  # max time to connect to pserver
        proxy_timeout 30s;          # max time between successive reads or writes
        proxy_pass 127.0.0.1:$instanceport;
    }
}
Wireshark shows the Server Name extension present in the ClientHello (screenshot omitted).
The nginx access log shows only hyphens for the preread variables:
108.49.96.66 [12/Apr/2019:11:50:58 +0000] TCP [-] [-] [glance-no-upstream-instance-configured] 500 0 0 0.066
I'm running nginx 1.14.2 on FreeBSD. How can I debug what is happening in the preread module?
================ UPDATE ===============
Turned on debug logging. Maybe "ssl preread: not a handshake" is a clue.
2019/04/12 14:49:50 [info] 61420#0: *9 client 108.49.96.66:54740 connected to 0.0.0.0:5501
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35000:256 #16
2019/04/12 14:49:50 [debug] 61419#0: accept on 0.0.0.0:5501, ready: 1
2019/04/12 14:49:50 [debug] 61419#0: accept() not ready (35: Resource temporarily unavailable)
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35600:256 #16
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 0
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 1
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 tcp_nodelay
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_do_handshake: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 kevent set event: 5: ft:-1 fl:0025
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer add: 5: 60000:29203481224
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL handshake handler: 0
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_do_handshake: 1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer del: 5: 29203481224
2019/04/12 14:49:50 [debug] 61420#0: *9 generic phase: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801CFF000:16384
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 posix_memalign: 0000000801C35900:256 #16
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer add: 5: 30000:29203451252
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: 81
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_read: -1
2019/04/12 14:49:50 [debug] 61420#0: *9 SSL_get_error: 2
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread handler
2019/04/12 14:49:50 [debug] 61420#0: *9 ssl preread: not a handshake
2019/04/12 14:49:50 [debug] 61420#0: *9 event timer del: 5: 29203451252
2019/04/12 14:49:50 [debug] 61420#0: *9 proxy connection handler
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801DF7000:400
2019/04/12 14:49:50 [debug] 61420#0: *9 malloc: 0000000801CD9000:16384
2019/04/12 14:49:50 [debug] 61420#0: *9 stream map started
2019/04/12 14:49:50 [debug] 61420#0: *9 stream map: "" "glance-no-upstream-instance-configured"
================= UPDATE 2 ======================
I tested using
openssl s_client -connect ... -servername ...
instead of my client. Now it appears that the preread module is blocked waiting for data for 30 seconds (SSL_get_error code 2 is SSL_ERROR_WANT_READ):
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2019/04/23 13:04:30 [debug] 61419#0: *12844 event timer del: 3: 30147561850
2019/04/23 13:04:30 [debug] 61419#0: *12844 generic phase: 2
2019/04/23 13:04:30 [debug] 61419#0: *12844 ssl preread handler
2019/04/23 13:04:30 [debug] 61419#0: *12844 malloc: 0000000801CA6140:16384
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL_read: -1
2019/04/23 13:04:30 [debug] 61419#0: *12844 SSL_get_error: 2
2019/04/23 13:04:30 [debug] 61419#0: *12844 ssl preread handler
2019/04/23 13:04:30 [debug] 61419#0: *12844 posix_memalign: 0000000801DB3400:256 #16
2019/04/23 13:04:30 [debug] 61419#0: *12844 event timer add: 3: 30000:30147531898
2019/04/23 13:05:00 [debug] 61419#0: *12844 event timer del: 3: 30147531898
2019/04/23 13:05:00 [debug] 61419#0: *12844 finalize stream session: 200
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream log handler
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream map started
2019/04/23 13:05:00 [debug] 61419#0: *12844 stream script var: ""
I found the problem:
listen 5501 ssl;   <-- the culprit
ssl_preread on;
The ssl parameter in the listen directive made this server block perform the TLS handshake itself. By the time the preread module ran, the ClientHello bytes had already been consumed, so there was nothing for it to read; that is consistent with everything I was seeing ("not a handshake", an empty $ssl_preread_server_name). In my case I still want nginx to offload the encryption, so I created a set of server blocks that terminate the TLS connections before passing traffic to my back end.
This is the relevant portion of my nginx config after fixing it. Note that the last server block (the one that uses ssl_preread) does not terminate the SSL connection.
########################################################################
### TLS Connections
### Listen for TLS on 5501 and forward to TCP sock 6500 (socket port)
### https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
map $ssl_preread_server_name $instanceport {
    presence.myglance.org 5502;
    presence-1.myglance.org 5503;
    presence-2.myglance.org 5504;
    default glance-no-upstream-instance-configured;
}

server {
    listen 5502 ssl;
    ssl_preread off;
    proxy_pass 127.0.0.1:6502;
}

server {
    listen 5503 ssl;
    ssl_preread off;
    proxy_pass 127.0.0.1:6503;
}

server {
    listen 5504 ssl;
    ssl_preread off;
    proxy_pass 127.0.0.1:6504;
}

server {
    listen 5501;
    ssl_preread on;
    proxy_connect_timeout 20s;  # max time to connect to pserver
    proxy_timeout 30s;          # max time between successive reads or writes
    proxy_pass 127.0.0.1:$instanceport;
}
In case you need to use ssl in the listen directive, you can simply use $ssl_server_name in the map block instead of $ssl_preread_server_name.
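For example, a minimal untested sketch of that variant, reusing the hostnames and backend ports from the question ($ssl_server_name is filled in by the stream SSL module once nginx itself terminates TLS, so ssl_preread is no longer needed):
map $ssl_server_name $instanceport {
    presence.myglance.org 6500;
    presence-1.myglance.org 6501;
    presence-2.myglance.org 6502;
    default glance-no-upstream-instance-configured;
}

server {
    listen 5501 ssl;
    proxy_pass 127.0.0.1:$instanceport;
}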