I am testing WebSockets and trying to find scenarios that trigger an error event on the client side. The protocol says:
"If the user agent was required to fail the WebSocket connection or the WebSocket connection is closed with prejudice, fire a simple event named error at the WebSocket object."
Following this logic, I tried connecting to a server that does not support WebSockets. I see that the browser actually fires the "close" event, but the "error" event is never triggered.
Q: Should the above scenario fire an error event? Also, what other scenarios can possibly trigger an error event on the client side?
The WebSocket specification is not fully followed by all browsers, so each implementation has its own differences. For example, Chrome won't fire the onerror event on connection issues.
But Firefox does fire onerror whenever the connection is broken by the endpoint (closed from code, the server being shut down, or even a failure to connect).
So for now you can't really count on this event being implemented consistently.
Based on my experience, the onerror event is pretty browser-specific and has only a few logical scenarios in common across browsers.
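To see which event each browser actually fires, a minimal sketch like this can help (the endpoint URL is just a placeholder; point it at a server that does not speak WebSocket, or at an unreachable host, and compare browsers):

```typescript
// Minimal sketch: attach all handlers and log what actually fires.
// The endpoint URL below is a placeholder, not a real server.
const ws = new WebSocket('wss://example.com/not-a-websocket-endpoint');

ws.onopen = () => console.log('connection opened');

ws.onerror = (event: Event) => {
  // Fired by some browsers when the connection fails or is torn down abruptly.
  console.log('error event fired', event);
};

ws.onclose = (event: CloseEvent) => {
  // Always fired once the connection ends; code 1006 means an abnormal closure
  // (no close frame received), which is typical when the handshake fails.
  console.log('close event fired, code:', event.code, 'wasClean:', event.wasClean);
};
```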
We have a Selenium server (Java) that opens an Electron app to test. During our tests, it seems to lose control of the program/process whenever we make any non-XHR request, i.e. any raw TCP request (via tools like serialport) or a DNS lookup from the app. When such code executes, things simply stop working (the code is async and never reaches the try/catch block enclosing it) and the test fails.
Has anyone seen this type of issue before?
We're facing an issue with handling unexpected behaviour when performing XMLHttpRequests on Android devices using React-Native. We've seen the app become unable to complete API calls even though the device is connected to the internet perfectly well (the browser can access non-cached sites just fine). The only way for our users to resolve the issue has been to completely restart the app.
To understand the issue and its severity, we wrapped all our API calls in a timer function in production and sent reports to Sentry whenever a request took longer than 30 seconds to finish. We've been receiving these kinds of reports in the hundreds per day, with durations sometimes in the hours or even days.
First, instead of using whatwg-fetch, we moved to axios so that we could manually set the timeout of each request, but this ended up not helping at all.
Second, we dug deeper into how React-Native actually implements timing out XHR requests on Android and found that it uses OkHttp3 under the hood. OkHttp has default values for the connect, read and write timeouts, and React-Native allows developers to change the connect timeout. However, OkHttp also has a method for setting a call timeout (covering everything from connecting to reading the response body), but this has a default value of 0 (no timeout) and React-Native doesn't allow users to change it. More on OkHttp timeouts can be found in the OkHttp documentation.
My question is whether this could be the cause of our worries and whether it should be reported to React-Native as a bug. Let's assume the following case:
app makes an API call to a resource
okhttp is able to connect to the resource within the specified timeout limit (default 10 s)
okhttp is able to write the request body to the server within the timeout limit (10 s)
the server processes the request but for some reason fails to start sending a response to the client. There could be many reasons for this, such as the server being disconnected, crashing, or simply losing the connection to the client without the client noticing. As there is no timeout here, okhttp will happily wait for days on end for the server to start responding to the request.
Am I missing something here, or am I misunderstanding how timeouts work in okhttp? Is there another, perhaps better solution than implementing the ability for developers to set callTimeout on API calls performed on Android? And also, isn't it a bit stupid that developers can't set their own write and read timeouts? Can't this lead to unexpected behaviour when you want to send large amounts of data in either direction on a slow connection? 10 s is quite long, but perhaps not long enough for all use cases.
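For illustration only, here is a minimal app-level watchdog that races each request against a timer, which could paper over the missing call timeout. withCallTimeout is a hypothetical helper, not anything React-Native provides, and it only stops the app from waiting; it does not cancel the underlying native request:

```typescript
// Hypothetical app-level watchdog: reject if the whole call takes too long.
// Note: this does not cancel the native OkHttp call, it only prevents the
// app from waiting on it indefinitely.
function withCallTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Call exceeded ${ms} ms`)),
      ms,
    );
    promise
      .then((value) => {
        clearTimeout(timer);
        resolve(value);
      })
      .catch((err) => {
        clearTimeout(timer);
        reject(err);
      });
  });
}

// Usage: wrap any fetch/axios call (the URL is a placeholder).
async function getProfile(): Promise<unknown> {
  const response = await withCallTimeout(
    fetch('https://example.com/api/profile'),
    30_000,
  );
  return response.json();
}
```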
Thanks for your help in advance!
I am using libnice on a C++ native server which is trying to make a WebRTC peer connection to a web browser client app. Using libnice, the candidate gathering is successful and the Offer/Answer exchange is successful. It then proceeds with the checking stage which is also successful. I can see in Wireshark that the STUN request/response exchanges are also successful.
However, the candidate checking keeps going on and on, constantly sending/receiving the successful STUN requests/responses.
It is not obvious to me, and the example code does not show, how to actually stop the candidate checks once they have succeeded. I have called the API routine nice_agent_attach_recv() and registered the callback, but it does not seem to fire. And even if it did, the callback signature does not give me any clue as to how to process any of the data.
Question: what processing should be done in the nice_agent_attach_recv() callback?
Thanks,
-Andres
No processing should need to be done in the callback. You would call nice_agent_get_local_candidates() and then continue with credentials and so on. There is a decent example in the libnice reference manual.
I'm using the SignalR library to develop a real-time notification web site using MVC 4.
My web app will run on several web servers, so I need to manage connections using a database.
Everything works fine except that the OnDisconnected method is not firing in all web browsers.
It seems to work fine with Firefox, but with IE9 and all mobile browsers it never fires.
So here's my problem: I don't want to rely on this method and end up with lots of unused connections in my database. Besides, even if OnDisconnected worked fine, there is a chance that the server goes down and these unused connections remain in the database.
I was thinking of a background job that runs, say, every minute and compares the database's connections against the current connections.
The problem is that I don't know how to implement that, or whether it is the best way to do so.
Is there a way to get all valid connections so I can compare them with the database?
Thanks in advance
The OnDisconnected method should ALWAYS fire for every browser. However, when the OnDisconnected method fires may vary.
Here is the process that SignalR goes through when triggering the OnDisconnected method:
SignalR binds to the unload event of the browser and attempts to send an AJAX request to the server to notify it that the client is going away (disconnecting). IF that AJAX request successfully reaches the server, the OnDisconnected method is triggered immediately. IF that AJAX request fails to reach the server due to network conditions or other unforeseen circumstances, then OnDisconnected will not fire UNTIL the ConnectionTimeout (a configuration value) has elapsed.
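As a general illustration of that unload-notification pattern (not SignalR's actual internal code, and the endpoint and connection id are hypothetical):

```typescript
// General illustration of the unload-notification pattern described above.
// This is NOT SignalR's internal implementation; the endpoint is hypothetical.
window.addEventListener('unload', () => {
  // sendBeacon queues a small POST that the browser tries to deliver even
  // while the page is being torn down. If it never reaches the server, the
  // server only learns about the disconnect once its connection timeout elapses.
  navigator.sendBeacon('/notify-disconnect', JSON.stringify({ connectionId: 'abc123' }));
});
```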
Soooo long story short, OnDisconnected should always eventually fire for every client and every browser. If it does not you should absolutely file an issue on GitHub.
Hope this helps!
I have impatient users who update a piece of data on a web page. The update triggers an asynchronous XMLHttpRequest and the response causes the page to update.
My question is this. If the user closes the browser window before the request completes, will the browser send an instruction to the web server to stop the request?
FWIW the users are using Firefox or Safari.
The browser should close the connection, which should result in the server receiving a close/FIN packet. At that point it is up to the server to decide whether it finishes processing or stops.
I checked to see if I could find what Apache does, but didn't find any documentation on it.
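For what it's worth, the client can also abort the in-flight request explicitly on unload rather than relying on the browser's teardown. A minimal sketch, with a hypothetical endpoint; the server still decides for itself whether to finish or stop once the socket closes:

```typescript
// Sketch: abort an in-flight request explicitly when the page unloads.
// The endpoint URL and payload are hypothetical.
const controller = new AbortController();

fetch('/api/update', {
  method: 'POST',
  body: JSON.stringify({ value: 42 }),
  signal: controller.signal,
})
  .then((res) => res.json())
  .then((data) => console.log('update applied', data))
  .catch((err) => {
    if (err instanceof DOMException && err.name === 'AbortError') {
      console.log('request aborted because the page is unloading');
    } else {
      throw err;
    }
  });

// When the window closes, cancel the request instead of leaving it hanging.
window.addEventListener('unload', () => controller.abort());
```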