Kurento recording file size is 0, why - webrtc

Why is the recording file size 0?
-rw-r--r-- 1 kurento kurento 0 May 2 02:27 recorder1.webm
-rw-r--r-- 1 kurento kurento 0 May 2 02:27 recorder2.webm
The audio streams are as follows:
                           ------> recorderEndpointA
                          |
Peer A <---------> RtpEndpointA <==> RtpEndpointB <---------> Peer B
                                           |
                                            ---> recorderEndpointB
The procedure is as follows:
- Create a MediaPipeline
- Create two RtpEndpoints
- Connect RtpEndpointA to RtpEndpointB
- Connect RtpEndpointB to RtpEndpointA
- Create two RecorderEndpoints
- Connect RtpEndpointA to RecorderEndpointA
- Connect RtpEndpointB to RecorderEndpointB
- Call record() on RecorderEndpointA
- Call record() on RecorderEndpointB
As a result, the call itself works normally, but the recording file size is 0.
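For reference, the wiring above corresponds roughly to the following kurento-client (Java) calls. This is only a minimal sketch: the KMS URL is a placeholder and the SDP offer/answer exchange with both peers is omitted.
import org.kurento.client.KurentoClient;
import org.kurento.client.MediaPipeline;
import org.kurento.client.MediaProfileSpecType;
import org.kurento.client.RecorderEndpoint;
import org.kurento.client.RtpEndpoint;

public class RecordingPipeline {
    public static void main(String[] args) {
        // Placeholder KMS URL
        KurentoClient kurento = KurentoClient.create("ws://localhost:8888/kurento");
        MediaPipeline pipeline = kurento.createMediaPipeline();

        // Two RTP endpoints, one per peer, bridged to each other
        RtpEndpoint rtpA = new RtpEndpoint.Builder(pipeline).build();
        RtpEndpoint rtpB = new RtpEndpoint.Builder(pipeline).build();
        rtpA.connect(rtpB);
        rtpB.connect(rtpA);

        // One recorder per direction
        // NOTE: with an audio-only call, the answers below point at WEBM_AUDIO_ONLY instead
        RecorderEndpoint recorderA = new RecorderEndpoint.Builder(pipeline, "file:///tmp/recorder1.webm")
                .withMediaProfile(MediaProfileSpecType.WEBM)
                .build();
        RecorderEndpoint recorderB = new RecorderEndpoint.Builder(pipeline, "file:///tmp/recorder2.webm")
                .withMediaProfile(MediaProfileSpecType.WEBM)
                .build();
        rtpA.connect(recorderA);
        rtpB.connect(recorderB);

        // SDP offer/answer with Peer A and Peer B would happen here (omitted)

        recorderA.record();
        recorderB.record();
    }
}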
The server log is as follows:
2017-05-02 02:27:32,241406 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"7","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint","operation":"connect","operationParams":{"sink":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint"},"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:32,242313 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint params AUDIO default default
2017-05-02 02:27:32,243147 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint params VIDEO default default
2017-05-02 02:27:32,244064 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint params DATA default default
2017-05-02 02:27:32,244852 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"7","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:33,241751 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"8","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint","operation":"connect","operationParams":{"sink":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint"},"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:33,242461 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint params AUDIO default default
2017-05-02 02:27:33,245165 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint params VIDEO default default
2017-05-02 02:27:33,246502 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint params DATA default default
2017-05-02 02:27:33,248380 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"8","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:33,257375 1872 [0x00007f60a9523700] debug KurentoMediaElementImpl MediaElementImpl.cpp:492 mediaFlowInStateChange() <kmsrtpendpoint32> Media Flowing IN in pad default with type audio
2017-05-02 02:27:34,244614 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"9","jsonrpc":"2.0","method":"create","params":{"constructorParams":{"mediaPipeline":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline","profile":7,"stopOnEndOfStream":true,"uri":"file:///tmp/recorder1.webm"},"properties":null,"type":"RecorderEndpoint"}}
<
2017-05-02 02:27:34,248876 1872 [0x00007f60cd89d700] info KurentoRecorderEndpointImpl RecorderEndpointImpl.cpp:83 RecorderEndpointImpl() Set WEBM profile
2017-05-02 02:27:34,249882 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"9","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint"}}
<
2017-05-02 02:27:34,295687 1872 [0x00007f60a37fe700] debug KurentoMediaElementImpl MediaElementImpl.cpp:447 mediaFlowOutStateChange() <kmsrtpendpoint32> Media Flowing OUT in pad default with type audio
2017-05-02 02:27:34,297274 1872 [0x00007f60aa742700] debug KurentoMediaElementImpl MediaElementImpl.cpp:492 mediaFlowInStateChange() <kmsrtpendpoint33> Media Flowing IN in pad default with type audio
2017-05-02 02:27:35,244376 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"10","jsonrpc":"2.0","method":"create","params":{"constructorParams":{"mediaPipeline":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline","profile":7,"stopOnEndOfStream":true,"uri":"file:///tmp/recorder2.webm"},"properties":null,"type":"RecorderEndpoint"}}
<
2017-05-02 02:27:35,250939 1872 [0x00007f60cd09c700] info KurentoRecorderEndpointImpl RecorderEndpointImpl.cpp:83 RecorderEndpointImpl() Set WEBM profile
2017-05-02 02:27:35,252356 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"10","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint"}}
<
2017-05-02 02:27:36,244544 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"11","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint","operation":"connect","operationParams":{"sink":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint"},"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:36,247151 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint params AUDIO default default
2017-05-02 02:27:36,247927 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint params VIDEO default default
2017-05-02 02:27:36,248536 1872 [0x00007f60cd89d700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/bec7d747-2772-49b4-8353-fe5346a15358_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint params DATA default default
2017-05-02 02:27:36,249189 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"11","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:37,244760 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"12","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint","operation":"connect","operationParams":{"sink":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint"},"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:37,245968 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint params AUDIO default default
2017-05-02 02:27:37,246712 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint params VIDEO default default
2017-05-02 02:27:37,247314 1872 [0x00007f60cd09c700] debug KurentoMediaElementImpl MediaElementImpl.cpp:867 connect() Connecting ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/f7fa31b4-abdd-4da0-99f2-dcc590cc17fc_kurento.RtpEndpoint -> ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint params DATA default default
2017-05-02 02:27:37,247905 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"12","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:38,245166 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"13","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/2dc7538e-f77b-44c0-90eb-4ea635298eb9_kurento.RecorderEndpoint","operation":"record","sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:38,303222 1872 [0x00007f60cd89d700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"13","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:38,312409 1872 [0x00007f60a07f8700] debug KurentoMediaElementImpl MediaElementImpl.cpp:492 mediaFlowInStateChange() <kmsrecorderendpoint28> Media Flowing IN in pad default with type audio
2017-05-02 02:27:39,245340 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":"14","jsonrpc":"2.0","method":"invoke","params":{"object":"ea2cfde5-7904-4991-bacb-f66440ef194b_kurento.MediaPipeline/eba7c283-34d5-43a1-8cd2-5892ecaddb53_kurento.RecorderEndpoint","operation":"record","sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba"}}
<
2017-05-02 02:27:39,294527 1872 [0x00007f60cd09c700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"id":"14","jsonrpc":"2.0","result":{"sessionId":"7f751217-ef13-4a92-bf50-48a424bcdaba","value":null}}
<
2017-05-02 02:27:39,309717 1872 [0x00007f60a0ff9700] debug KurentoMediaElementImpl MediaElementImpl.cpp:492 mediaFlowInStateChange() <kmsrecorderendpoint29> Media Flowing IN in pad default with type audio

I'm posting an answer here in case anyone else winds up here wondering what's going on. It is VERY IMPORTANT that the RecorderEndpoint knows which media container profile is being used. My problem was that I had audio turned off in the front-end media constraints while developing. You make the RecorderEndpoint aware of this by passing in the correct MediaProfileSpecType:
this.recorder = new RecorderEndpoint.Builder(pipeline, RECORDING_PATH + roomName + '_' + name + RECORDING_EXT)
        .withMediaProfile(MediaProfileSpecType.WEBM)
        .build();
Simply turning audio back on in the front end fixed it for me, but you can also reference the code in the other Kurento tutorials, where the Kurento team connects elements according to the media constraints in use.
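For example, something along these lines (a hypothetical sketch, not the tutorial code itself) derives the profile from whether the incoming SDP offer actually contains audio and video sections:
import org.kurento.client.MediaProfileSpecType;

final class RecorderProfiles {
    // Pick the container profile from the peer's SDP offer. If the front end
    // disabled video (or audio), the recorder profile has to match; otherwise the
    // RecorderEndpoint can sit waiting for a track that never arrives.
    static MediaProfileSpecType fromSdpOffer(String sdpOffer) {
        boolean hasVideo = sdpOffer.contains("m=video");
        boolean hasAudio = sdpOffer.contains("m=audio");
        if (hasVideo && hasAudio) {
            return MediaProfileSpecType.WEBM;
        }
        return hasVideo ? MediaProfileSpecType.WEBM_VIDEO_ONLY
                        : MediaProfileSpecType.WEBM_AUDIO_ONLY;
    }
}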

Thank you all. Kurento is being used here only as an audio media server. When I created the RecorderEndpoint, mediaProfile was not set; however, mediaProfile should be set to WEBM_AUDIO_ONLY. Now it works.
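In other words, for an audio-only pipeline the recorder creation looks roughly like this (a minimal sketch based on the builder shown above, assuming the same pipeline and RtpEndpoint objects; connecting only the audio stream makes the intent explicit):
RecorderEndpoint recorder = new RecorderEndpoint.Builder(pipeline, "file:///tmp/recorder1.webm")
        .withMediaProfile(MediaProfileSpecType.WEBM_AUDIO_ONLY)
        .build();

// Connect only the audio stream from the RTP endpoint to the recorder
rtpEndpointA.connect(recorder, MediaType.AUDIO);
recorder.record();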

We've been using Kurento to power WebRTC video recording in Chrome at Pipe.
We've also run into the 0-byte video recordings issue. So far we've identified two causes:
- The TURN/STUN protocol is blocked by some VPNs like TunnelBear (because of this), so no ICE candidates were ever found for the user. Our first beta versions did not account for that.
- The user was able to start recording without interacting with Chrome's Allow/Block camera permission dialog.

Related

Connections handling

So I've been using Karate for a while now, and there is an issue we have been facing over the last year: org.apache.http.conn.ConnectTimeoutException
Other threads about that ConnectTimeoutException mentioned it was solvable by specifying a proxy, but that did not help us.
After tons of investigation, it turned out that our Azure SNAT was exhausted, meaning Karate was opening way too many connections.
To verify this I enabled debug logging and used this feature:
Background:
* url "https://www.karatelabs.io/"
Scenario:
* method GET
* method GET
The logs then had the following lines:
13:10:17.868 [main] DEBUG com.intuit.karate - request:
1 > GET https://www.karatelabs.io/
1 > Host: www.karatelabs.io
1 > Connection: Keep-Alive
1 > User-Agent: Apache-HttpClient/4.5.13 (Java/17.0.4.1)
1 > Accept-Encoding: gzip,deflate
13:10:17.868 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 0 of 5; total allocated: 0 of 10]
13:10:17.874 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection leased: [id: 0][route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 1 of 5; total allocated: 1 of 10]
13:10:17.875 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Opening connection {s}->https://www.karatelabs.io:443
13:10:17.883 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connecting to www.karatelabs.io/34.149.87.45:443
13:10:17.883 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Connecting socket to www.karatelabs.io/34.149.87.45:443 with timeout 30000
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled protocols: [TLSv1.3, TLSv1.2]
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled cipher suites:[...]
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Starting handshake
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Secure session established
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated protocol: TLSv1.3
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated cipher suite: TLS_AES_256_GCM_SHA384
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer principal: CN=karatelabs.io
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer alternative names: [karatelabs.io, www.karatelabs.io]
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - issuer principal: CN=Sectigo RSA Domain Validation Secure Server CA, O=Sectigo Limited, L=Salford, ST=Greater Manchester, C=GB
13:10:18.014 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connection established localIp<->serverIp
13:10:18.015 [main] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-0: set socket timeout to 120000
13:10:18.015 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Executing request GET / HTTP/1.1
...
13:10:18.066 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Connection can be kept alive indefinitely
...
...
13:10:18.196 [main] DEBUG com.intuit.karate - request:
2 > GET https://www.karatelabs.io/
13:10:18.196 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 0 of 5; total allocated: 0 of 10]
13:10:18.196 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection leased: [id: 1][route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 1 of 5; total allocated: 1 of 10]
13:10:18.196 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Opening connection {s}->https://www.karatelabs.io:443
13:10:18.196 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connecting to www.karatelabs.io/34.149.87.45:443
13:10:18.196 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Connecting socket to www.karatelabs.io/34.149.87.45:443 with timeout 30000
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled protocols: [TLSv1.3, TLSv1.2]
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled cipher suites:[...]
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Starting handshake
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Secure session established
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated protocol: TLSv1.3
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated cipher suite: TLS_AES_256_GCM_SHA384
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer principal: CN=karatelabs.io
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer alternative names: [karatelabs.io, www.karatelabs.io]
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - issuer principal: CN=Sectigo RSA Domain Validation Secure Server CA, O=Sectigo Limited, L=Salford, ST=Greater Manchester, C=GB
13:10:18.236 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connection established localIp<->serverIp
13:10:18.236 [main] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-1: set socket timeout to 120000
...
13:10:18.279 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Connection can be kept alive indefinitely
...
...
13:10:18.609 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
13:10:18.610 [Finalizer] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-1: Shutdown connection
13:10:18.611 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager shut down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-2: Shutdown connection
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager shut down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
"Connecting to socket" and "handshake" indicate that karate is establishing a new connection instead of using an already opened one, even though I am sending a request to the same host.
On the other hand, on longer scenarios, I was seeing "http-outgoing-x: Shutdown connection" after about ~1s from opening it, in the middle of the run, despite having "karate.configure('readTimeout', 120000)" specified.
I don't think that was intentional, especially after seeing the "keep-alive" header and the "Connection can be kept alive indefinitely" in the log"
That being said, is there any way to force karate to use the same connection instead of establishing a new one each request?
As far as we know, we use the Apache HttpClient API the right way. But you never know. The best thing is for you to dive into the code and see what we could be missing. Or you could provide a way to replicate it, following these instructions: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
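To make the keep-alive discussion concrete, here is a standalone Apache HttpClient 4.x sketch (plain Java, outside Karate) of the behaviour being discussed: with a single client instance and a pooling connection manager, the second request reuses the pooled connection as long as the first response entity was fully consumed. Whether and when Karate recreates its client between steps is exactly the open question above.
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(pool)
                .build()) {
            for (int i = 0; i < 2; i++) {
                try (CloseableHttpResponse response = client.execute(new HttpGet("https://www.karatelabs.io/"))) {
                    // Fully consuming the entity releases the connection back to the pool for reuse
                    EntityUtils.consume(response.getEntity());
                    System.out.println("Request " + (i + 1) + ": " + response.getStatusLine());
                }
            }
        }
    }
}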

Karate - ERROR com.intuit.karate - java.net.SocketException: Connection reset, http call failed after xxx milliseconds [duplicate]

So my company has implemented OAuth 2.0 on two different internal servers. When I try using Karate to get the token back from the myldev server, I get it back without any issues (with configure ssl = true).
But when I do the exact same call against the mylqa server, I get the following error:
11:01:46.113 [main] DEBUG org.apache.http.impl.execchain.MainClientExec - Opening connection {s}-> private url
11:01:46.113 [main] DEBUG org.apache.http.impl.conn.DefaultHttpClientConnectionOperator - Connecting to mylqa.corp.realpage.com/10.34.208.35:443
11:01:46.113 [main] DEBUG org.apache.http.conn.ssl.LenientSslConnectionSocketFactory - Connecting socket to mylqa.corp.realpage.com/10.34.208.35:443 with timeout 30000
11:01:46.117 [main] DEBUG org.apache.http.conn.ssl.LenientSslConnectionSocketFactory - Enabled protocols: [TLSv1, TLSv1.1, TLSv1.2]
11:01:46.120 [main] DEBUG org.apache.http.conn.ssl.LenientSslConnectionSocketFactory - Enabled cipher suites:[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
11:01:46.120 [main] DEBUG org.apache.http.conn.ssl.LenientSslConnectionSocketFactory - Starting handshake
11:01:46.126 [main] DEBUG org.apache.http.impl.conn.DefaultManagedHttpClientConnection - http-outgoing-3: Shutdown connection
11:01:46.127 [main] DEBUG org.apache.http.impl.execchain.MainClientExec - Connection discarded
11:01:46.127 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 3][route: {s}-> [private url][total kept alive: 0; route allocated: 0 of 5; total allocated: 0 of 10]
11:01:46.127 [main] ERROR com.intuit.karate - java.net.SocketException: Connection reset, http call failed after 194 milliseconds for URL: private url
11:01:46.127 [main] ERROR com.intuit.karate - http request failed:
java.net.SocketException: Connection reset
I haven't faced this issue with other tools on my Mac. JMeter, which uses Apache HttpClient 4.5.5, didn't have an issue getting the response back.
Regards,
JK
P.S.
I'm kinda new to SSL and HTTPS, so please go easy on me. Also, I've made sure that both the dev server and the QA server have the exact same configuration.
Are you sure that both are HTTPS? It sounds very much like the QA server has stronger encryption in place. Have a look at this ticket, and I hope it gets you on your way!
https://github.com/intuit/karate/issues/243
EDIT - extra info:
Someone else had a similar question, but sadly no answer yet: Link
Similar issue turned out to be missing Accept header: Link
Can you try the new custom certificate support: https://github.com/intuit/karate#x509-certificate-authentication
Related question on Stack Overflow: SSLHandshakeException for a simple GET request in Karate Framework
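One way to narrow this down outside Karate is to attempt the TLS handshake directly from the same JVM and print what gets negotiated; if this standalone sketch also fails against the QA host, the problem sits between the JVM's TLS settings and the server rather than anything Karate does (the host name below is the one from the log).
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeCheck {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("mylqa.corp.realpage.com", 443)) {
            // Fails with a handshake or connection-reset error if the server rejects the JVM's TLS settings
            socket.startHandshake();
            System.out.println("Protocol: " + socket.getSession().getProtocol());
            System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}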

RabbitMQ: change to listeners.tcp.default port is not applied as expected

Through Homebrew I have installed RabbitMQ.
It starts with ./rabbitmq-server without any problem:
## ##
## ## RabbitMQ 3.7.6. Copyright (C) 2007-2018 Pivotal Software, Inc.
########## Licensed under the MPL. See http://www.rabbitmq.com/
###### ##
########## Logs: /usr/local/var/log/rabbitmq/rabbit#localhost.log
/usr/local/var/log/rabbitmq/rabbit#localhost_upgrade.log
Starting broker...
completed with 6 plugins.
I have read the following:
RabbitMQ Configuration
rabbitmq.conf.example
rabbitmq_conf_homebrew
Thus, the following files exist in the /usr/local/etc/rabbitmq path:
enabled_plugins
rabbitmq-env.conf
rabbitmq.conf (created manually)
The contents of these files are:
enabled_plugins
[rabbitmq_management,rabbitmq_stomp,rabbitmq_amqp1_0,rabbitmq_mqtt].
rabbitmq-env.conf
CONFIG_FILE=/usr/local/etc/rabbitmq/rabbitmq
NODE_IP_ADDRESS=127.0.0.1
NODENAME=rabbit#localhost
rabbitmq.conf
# listeners.tcp.default = 5672
listeners.tcp.default = 5662
#listeners.tcp.local = 127.0.0.1:5662 <-- Alpha
#listeners.tcp.local_v6 = ::1:5662 <-- Beta
# mqtt.listeners.tcp.default = 1883
mqtt.listeners.tcp.default = 1873
# stomp.listeners.tcp.default = 61613
stomp.listeners.tcp.default = 61603
The purpose is to decrease each of these ports by 10. It only works for mqtt and stomp. The listeners.tcp.default value is ignored: it remains 5672 instead of the expected 5662. I can confirm this by showing the /usr/local/var/log/rabbitmq/rabbit#localhost.log content, as follows:
...
2018-07-29 12:46:31.461 [info] <0.321.0> Starting message stores for vhost '/'
2018-07-29 12:46:31.461 [info] <0.325.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2018-07-29 12:46:31.465 [info] <0.321.0> Started message store of type transient for vhost '/'
2018-07-29 12:46:31.465 [info] <0.328.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2018-07-29 12:46:31.490 [info] <0.321.0> Started message store of type persistent for vhost '/'
2018-07-29 12:46:31.495 [info] <0.363.0> started TCP Listener on 127.0.0.1:5672
2018-07-29 12:46:31.495 [info] <0.223.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit#localhost
2018-07-29 12:46:31.495 [info] <0.223.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit#localhost
2018-07-29 12:46:31.496 [info] <0.33.0> Application rabbit started on node rabbit#localhost
2018-07-29 12:46:31.496 [info] <0.369.0> rabbit_stomp: default user 'guest' enabled
2018-07-29 12:46:31.497 [info] <0.385.0> started STOMP TCP Listener on [::]:61603
2018-07-29 12:46:31.497 [info] <0.33.0> Application rabbitmq_stomp started on node rabbit#localhost
2018-07-29 12:46:31.497 [info] <0.33.0> Application cowboy started on node rabbit#localhost
2018-07-29 12:46:31.498 [info] <0.33.0> Application rabbitmq_web_dispatch started on node rabbit#localhost
2018-07-29 12:46:31.572 [info] <0.33.0> Application rabbitmq_management_agent started on node rabbit#localhost
2018-07-29 12:46:31.600 [info] <0.438.0> Management plugin started. Port: 15672
2018-07-29 12:46:31.600 [info] <0.544.0> Statistics database started.
2018-07-29 12:46:31.601 [info] <0.33.0> Application rabbitmq_management started on node rabbit#localhost
2018-07-29 12:46:31.601 [info] <0.33.0> Application rabbitmq_amqp1_0 started on node rabbit#localhost
2018-07-29 12:46:31.601 [info] <0.557.0> MQTT retained message store: rabbit_mqtt_retained_msg_store_dets
2018-07-29 12:46:31.621 [info] <0.575.0> started MQTT TCP Listener on [::]:1873
2018-07-29 12:46:31.622 [info] <0.33.0> Application rabbitmq_mqtt started on node rabbit#localhost
2018-07-29 12:46:31.622 [notice] <0.94.0> Changed loghwm of /usr/local/var/log/rabbitmq/rabbit#localhost.log to 50
2018-07-29 12:46:31.882 [info] <0.5.0> Server startup complete; 6 plugins started.
* rabbitmq_mqtt
* rabbitmq_amqp1_0
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
* rabbitmq_stomp
Thus, from the above:
- started TCP Listener on 127.0.0.1:5672: should be 5662
- started STOMP TCP Listener on [::]:61603: changed as expected
- Management plugin started. Port: 15672: no change needed here
- started MQTT TCP Listener on [::]:1873: changed as expected
I have the same behaviour if I enable Alpha and Beta.
The server is stopped with ./rabbitmqctl stop and started again with ./rabbitmq-server
What is missing or wrong?
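For what it's worth, a throwaway check like the following (plain Java; the host and ports are the ones from the config above) confirms, independently of the log, which AMQP port is actually accepting connections:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        for (int port : new int[] {5672, 5662}) {
            try (Socket socket = new Socket()) {
                // 500 ms connect timeout; succeeds only if something is listening on the port
                socket.connect(new InetSocketAddress("127.0.0.1", port), 500);
                System.out.println("Port " + port + " is accepting connections");
            } catch (IOException e) {
                System.out.println("Port " + port + " is not reachable: " + e.getMessage());
            }
        }
    }
}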

SCDF yarn connection error

I have deployed Spring Cloud Data Flow on a virtual YARN cluster. Starting the server with ./bin/dataflow-server-yarn executes correctly and returns:
2016-11-02 10:31:59.786 INFO 42493 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2016-11-02 10:31:59.787 INFO 42493 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'spring-cloud-dataflow-server-yarn:9393.errorChannel' has 1 subscriber(s).
2016-11-02 10:31:59.787 INFO 42493 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2016-11-02 10:31:59.896 INFO 42493 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 9393 (http)
2016-11-02 10:31:59.901 INFO 42493 --- [ main] o.s.c.d.server.yarn.YarnDataFlowServer : Started YarnDataFlowServer in 16.026 seconds (JVM running for 16.485)
I can then start ./bin/dataflow-shell; from there I can import apps, and create and list streams without errors. However, if I try to deploy the created stream, the following connection error happens:
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploy request for org.springframework.cloud.deployer.spi.core.AppDeploymentRequest#23d59aea
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploying request for definition [AppDefinition#3350878c name = 'time', properties = map['spring.cloud.stream.bindings.output.producer.requiredGroups' -> 'ticktock', 'spring.cloud.stream.bindings.output.destination' -> 'ticktock.time']]
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Parameters for definition {spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deployment properties for request {spring.cloud.deployer.group=ticktock}
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : started org.springframework.statemachine.support.DefaultStateMachineExecutor#6d68eeb7
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : started RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE / / uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786 / id=null
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#79158163
2016-11-02 10:52:58.280 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#79158163
2016-11-02 10:52:59.281 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:00.282 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:01.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:02.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:03.284 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:04.285 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:05.286 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:06.287 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:07.288 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:08.289 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:08.290 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMapp--spring.yarn.appName=scdstream:app:ticktock,--spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/ found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#7d3bc3f0
2016-11-02 10:53:09.294 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:10.295 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:11.296 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:12.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:13.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:14.298 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:15.299 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:16.300 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:17.301 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:18.302 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:18.361 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.fs.TrashPolicyDefault : Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : stopped org.springframework.statemachine.support.DefaultStateMachineExecutor#6d68eeb7
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : stopped RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE / / uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786 / id=null
2016-11-02 10:53:18.747 ERROR 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.AbstractDeployerStateMachine : Passing through error state DefaultStateContext [stage=STATE_ENTRY, message=GenericMessage [payload=DEPLOY, headers={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}], messageHeaders={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}, extendedState=DefaultExtendedState [variables={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, appname=scdstream:app:ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, messageId=f0f7b54e-e59d-69c0-a405-f0907fa46343, clusterId=ticktock:time, error=org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused}], transition=org.springframework.statemachine.transition.DefaultExternalTransition#77c2bbfc, stateMachine=UNDEPLOYMODULE DESTROYCLUSTER STOPCLUSTER DEPLOYMODULE RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE ERROR READY ERROR_JUNCTION UNDEPLOYEXIT DEPLOYEXIT / ERROR / uuid=436b73fe-991a-4c22-a418-334f895d41e5 / id=null, source=null, target=null, sources=null, targets=null, exception=null]
2016-11-02 10:53:18.748 WARN 42788 --- [nio-9393-exec-8] o.s.c.d.s.c.StreamDeploymentController : Exception when deploying the app StreamAppDefinition [streamName=ticktock, name=time, registeredAppName=time, properties={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}]: java.util.concurrent.ExecutionException: org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Changing the IP address to localhost yielded the same results.
Here is my server.yml:
logging:
  level:
    org.apache.hadoop: INFO
    org.springframework.yarn: INFO
maven:
  remoteRepositories:
    springRepo:
      url: https://repo.spring.io/libs-snapshot
spring:
  main:
    show_banner: false
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://192.168.137.135:8020
    resourceManagerHost: 192.168.137.135
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: 192.168.137.135:8030
  # Configured for Redis running on localhost. Replace at least host property when running in a
  # distributed environment.
  redis:
    port: 6379
    host: 192.168.137.135
  # Configured for an embedded in-memory H2 database. Replace the datasource configuration with properties
  # matching your preferred database to be used instead, if needed, or when running in a distributed environment.
  #rabbitmq:
  #  addresses: localhost:5672
  # for default embedded database
  datasource:
    url: jdbc:h2:tcp://localhost:19092/mem:dataflow
    username: sa
    password:
    driverClassName: org.h2.Driver
  # # for mysql/mariadb datasource
  # datasource:
  #   url: jdbc:mysql://localhost:3306/yourDB
  #   username: yourUsername
  #   password: yourPassword
  #   driverClassName: org.mariadb.jdbc.Driver
  # # for postgresql datasource
  # datasource:
  #   url: jdbc:postgresql://localhost:5432/yourDB
  #   username: yourUsername
  #   password: yourPassword
  #   driverClassName: org.postgresql.Driver
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9093
          zkNodes: localhost:2181
    config:
      enabled: false
      server:
        bootstrap: true
    deployer:
      yarn:
        app:
          baseDir: /dataflow
          streamappmaster:
            memory: 512m
            virtualCores: 1
            javaOpts: "-Xms512m -Xmx512m"
          streamcontainer:
            priority: 5
            memory: 256m
            virtualCores: 1
            javaOpts: "-Xms64m -Xmx256m"
          taskappmaster:
            memory: 512m
            virtualCores: 1
            javaOpts: "-Xms512m -Xmx512m"
          taskcontainer:
            priority: 10
            memory: 256m
            virtualCores: 1
            javaOpts: "-Xms64m -Xmx256m"
  # yarn:
  #   hostdiscovery:
  #     pointToPoint: false
  #     loopback: false
  #     preferInterface: ['eth', 'en']
  #     matchIpv4: 192.168.0.0/24
  #     matchInterface: eth\\d*
I found a workaround; it seems the port was the problem. I changed the port number in yarn-site.xml, made the same change in server.yml, and voila!
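For anyone hitting the same retry loop, a small probe (a sketch, outside SCDF, using the Hadoop YARN client API) can confirm whether the ResourceManager address from server.yml is actually reachable; yarn.resourcemanager.address is the property that the yarn-site.xml port change affects.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new YarnConfiguration();
        // Point the client explicitly at the ResourceManager address from server.yml
        conf.set(YarnConfiguration.RM_ADDRESS, "192.168.137.135:8032");
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        // Fails with the same connection-refused retries if the RM is not reachable at that address
        System.out.println("NodeManagers: " + yarnClient.getYarnClusterMetrics().getNumNodeManagers());
        yarnClient.stop();
    }
}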