Frequent disconnects with XHR-Polling, Socket.io and Titanium - xmlhttprequest

I'm using Socket.io on the Titanium framework via https://github.com/nowelium/socket.io-titanium. So far, that "port" only supports the XHR-polling transport. In the past, I've had success with Socket.io using the Websocket transport.
The problem I'm having now is that my socket connection seems to "drop" every 10 seconds for a few seconds at a time. This means chat messages are dropped, etc. Is this expected behavior with XHR-polling - do I need to implement a queue system - or is there some way I can fix this issue?
debug - setting poll timeout
debug - discarding transport
debug - cleared close timeout for client 407473253144647189
debug - clearing poll timeout
info - transport end
debug - set close timeout for client 407473253144647189
debug - cleared close timeout for client 407473253144647189
debug - discarding transport
debug - client authorized
info - handshake authorized 4149191422068834219
debug - setting request GET /socket.io/1/xhr-polling/4149191422068834219?t=Thu%20Jan%2012%202012%2022%3A37%3A47%20GMT-0800%20%28PST%29
debug - setting poll timeout
debug - client authorized for
debug - clearing poll timeout
debug - xhr-polling writing 1::
debug - set close timeout for client 4149191422068834219
Connection
debug - setting request GET /socket.io/1/xhr-polling/4149191422068834219?t=Thu%20Jan%2012%202012%2022%3A37%3A47%20GMT-0800%20%28PST%29
debug - setting poll timeout
debug - discarding transport
debug - cleared close timeout for client 4149191422068834219

Why don't you load socket.io.js in a web view and pump events via Ti.App.fireEvent / addEventListener? That gives you WebSockets, which don't have the limitations of polling.
<html>
  <head>
    <script src="http://63.10.10.123:1337/socket.io/socket.io.js"></script>
    <script>
      var socket = io.connect('http://63.10.10.123:1337');
      socket.on('onSomething', function (data) {
        Ti.App.fireEvent('onSomething', data);
      });
      Ti.App.addEventListener('emitSomething', function (data) {
        socket.emit('emitSomething', data);
      });
    </script>
  </head>
  <body>
  </body>
</html>
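For completeness, the Titanium side of that bridge could look roughly like this (the local HTML file name and the window setup are assumptions; the event names mirror the snippet above):
// Load the local HTML bridge in a web view so it gets the Ti.App namespace.
var win = Ti.UI.createWindow();
var bridge = Ti.UI.createWebView({ url: 'socket_bridge.html' });
win.add(bridge);
win.open();

// Receive events fired from inside the web view.
Ti.App.addEventListener('onSomething', function (data) {
    Ti.API.info('got onSomething: ' + JSON.stringify(data));
});

// Push data to the web view, which emits it over the socket.
Ti.App.fireEvent('emitSomething', { text: 'hello from native' });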
EDIT: I want to note that I did this in a project, and it was crashing my app very consistently on iOS. I looked around, and other devs were hitting this too, even without using Titanium. I wouldn't recommend taking this approach, or at the very least, I would test it very thoroughly (especially around backgrounding and resuming the app). Instead, I use Appcelerator's TCP sockets with my own light protocol to stream data between client and server.
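For what it's worth, a minimal sketch of that TCP-socket approach with Titanium's Ti.Network.Socket could look roughly like this (the host and port come from the snippet above; the newline-delimited framing and the callback bodies are assumptions, not the protocol I actually use):
// Sketch only: newline-delimited framing is an assumed example protocol.
var socket = Ti.Network.Socket.createTCP({
    host: '63.10.10.123',
    port: 1337,
    connected: function (e) {
        // send one newline-terminated message to the server
        e.socket.write(Ti.createBuffer({ value: 'hello\n' }));
        // pump incoming data asynchronously and log it
        Ti.Stream.pump(e.socket, function (readEvent) {
            if (readEvent.bytesProcessed > 0) {
                Ti.API.info('received: ' + readEvent.buffer.toString());
            }
        }, 1024, true);
    },
    error: function (e) {
        Ti.API.error('socket error: ' + e.error);
    }
});
socket.connect();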

Related

Apache Curator connection state listener not always called with RECONNECTED state change

I am using Apache Curator v4.3.0 (ZK v3.5.8), and I noticed that in some disconnect/reconnect scenarios, I stop getting a RECONNECTED event on the registered listener(s).
CuratorFramework client = ...;
// retry policy is RetryUntilElapsed with Integer.MAX_VALUE
// sessionTimeout is 15 sec
// connectionTimeout is 5 sec
client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
    @Override
    public void stateChanged(CuratorFramework client, ConnectionState newState) {
        // handle SUSPENDED / LOST / RECONNECTED here
    }
});
Although I do see that the ConnectionStateManager prints the state change:
[org.apache.zookeeper.ClientCnxn] - Client session timed out, have not heard from server in 15013ms for sessionid 0x10000037e340012, closing socket connection and attempting reconnect
[org.apache.zookeeper.ClientCnxn] - Opening socket connection to server
...
[org.apache.curator.ConnectionState] - Session expired event received
[org.apache.zookeeper.ClientCnxn] - Session establishment complete on server
[org.apache.curator.framework.state.ConnectionStateManager] - State change: RECONNECTED
Usually I see my listener called in stateChanged right after that, but not always.
The CuratorFramework client is shared between multiple components that register different listeners. I didn't see any restriction that there should be only one client per listener. But when I don't share the client, the problem no longer occurs.
Any suggestions on how to proceed debugging this problem?
Thank you,
Meron
This appears to be the bug that was fixed in Curator 5.0.0 - https://issues.apache.org/jira/browse/CURATOR-525 - if you can please test with 5.0.0 and see if it fixes the issue.

Symfony Messenger: retry delay not working with Redis transport

I have a Symfony 4 application using the Symfony Messenger component (version 4.3.2) to dispatch messages.
For asynchronous message handling, some Redis transports are configured and they work fine. But then I decided that one of them should retry a few times when message handling fails. I configured a retry strategy, and the transport actually started retrying on failure, but it seems to ignore the delay configuration (the delay, multiplier and max_delay keys): all the retry attempts are made without any delay, within a second or so, which is really undesirable in this use case.
My Messenger configuration (config/packages/messenger.yaml) looks like this
framework:
    messenger:
        default_bus: messenger.bus.default
        transports:
            transport_without_retry:
                dsn: '%env(REDIS_DSN)%/without_retry'
                retry_strategy:
                    max_retries: 0
            transport_with_retry:
                dsn: '%env(REDIS_DSN)%/with_retry'
                retry_strategy:
                    max_retries: 5
                    delay: 10000 # 10 seconds
                    multiplier: 3
                    max_delay: 3600000
        routing:
            'App\Message\RetryWorthMessage': transport_with_retry
I tried replacing Redis with Doctrine (as the implementation of the retrying transport) and voilà - the delays started working as expected. I therefore suspect that the Redis transport implementation doesn't support delayed retries. But I read the docs carefully, searched related GitHub issues, and still didn't find a definitive answer.
So my question is: does Redis transport support delayed retry? If it does, how do I make it work?
It turned out that the Redis transport does support delayed retries, but only since Messenger version 4.4.

Websphere application server LDAP connection pool

We are using WebSphere Application Server 8.5.0.0. We have a requirement where we have to query an LDAP server to get the customer details. I tried to configure the connection pool as described here and here.
I passed the below JVM arguments
-Dcom.sun.jndi.ldap.connect.pool.maxsize=5
-Dcom.sun.jndi.ldap.connect.pool.timeout=60000
-Dcom.sun.jndi.ldap.connect.pool.debug=all
Below is a sample code snippet
Hashtable<String,String> env = new Hashtable<String,String>();
...
...
// enable the JNDI LDAP connection pool for this context
env.put("com.sun.jndi.ldap.connect.pool", "true");
env.put("com.sun.jndi.ldap.connect.timeout", "5000");
InitialDirContext c = new InitialDirContext(env);
...
...
c.close(); // called safely in a finally block
I have two issues here:
1) When I call the service for the 6th time, I get javax.naming.ConnectionException: Timeout exceeded while waiting for a connection: 5000ms. I checked the connection pool debug logs and noticed that the connections are not returned to the pool immediately, despite the context being closed safely in a finally block. The connections are released after some time and expire some time after that. Thereafter, if I call the service again, it connects to the LDAP server, but new connections are created. However, when I run the same code as a standalone application (multithreaded, looping 50 times), the connections are returned/released immediately.
2) I can see the connection pool debug logs, but they are written to the System.err log. Is this an issue? Can I ignore it?
Can anyone please let me know what I am doing wrong?

Attach stdin of docker container via websocket

I am using the chrome websocket client extension to attach to a running container calling the Docker remote API like this:
ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdout=1
The container is started locally from my machine executing a jar in the image that waits for user input. Basically I want to supply this input from an input field in the web browser.
Although I am able to attach using the API endpoint, I am encountering a few issues - probably due to my lackluster understanding of the ws endpoint as well as the bad documentation - that I would like to resolve:
1) When sending data using the chrome websocket client extension, the frame appears to be transmitted over the websocket according to the network inspection tool. However, the process running in the container waiting for input only receives the sent data when the websocket connection is closed - all at once. Is this standard behaviour? Intuitively you would expect that the input is immediately sent to the process.
2) If I attach to stdin and stdout at the same time, the docker daemon gets stuck waiting for stdin to attach, and as a result I cannot see any output:
[debug] attach.go:22 attach: stdin: begin
[debug] attach.go:59 attach: stdout: begin
[debug] attach.go:143 attach: waiting for job 1/2
[debug] server.go:2312 Closing buffered stdin pipe
[error] server.go:844 Error attaching websocket: use of closed network connection
I have solved this by opening two separate connections for stdin and stdout, which works but is really annoying. Any ideas on this one?
Thanks in advance!
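For reference, the two-connection workaround described above could look roughly like this from the browser side (the container ID is the one from the question, and the stdin query parameters follow the same pattern as the stdout URL; error handling is omitted):
// One connection for output, one for input.
var stdoutWs = new WebSocket('ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdout=1');
var stdinWs = new WebSocket('ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdin=1');

// Print whatever the container writes to stdout.
stdoutWs.onmessage = function (event) {
    console.log('container output:', event.data);
};

// Send a line of input once the stdin connection is open.
stdinWs.onopen = function () {
    stdinWs.send('some input for the jar\n');
};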

heartBeatIntervalInSecs has no effect if the application starts with the server stopped

WL 6.1
I have an application with:
ConnectOnStartup: true
heartBeatIntervalInSecs: 30
If the server is running and I start the application, I can see a trace in the application log every 30 seconds for the heartbeat.
But if the server is stopped when I start the application, there is no heartbeat trace at all.
I handle the connection error with onConnectionFailure and let the application start anyway.
Is this ok? How could I enable the heartbeat manually?
I have tested this on Android.
Thank you.
There is an API for this: WL.Client.setHeartBeatInterval(interval)
It accepts:
-1 to disable the heartbeat
Any other number (in seconds) to enable it
In your implementation, simply disable or enable the heartbeat (by setting an interval) whenever required.
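For example, a rough sketch of wiring this into the connect flow (the placement inside wlCommonInit and the retry handling are assumptions, not prescribed Worklight behavior):
function wlCommonInit() {
    WL.Client.connect({
        onSuccess: function () {
            // server reachable again: turn the heartbeat (back) on
            WL.Client.setHeartBeatInterval(30);
        },
        onFailure: function () {
            // server still down: keep the heartbeat disabled and retry later
            WL.Client.setHeartBeatInterval(-1);
        }
    });
}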