I'm using the SignalR library to develop a real-time notification website using MVC 4.
My web app will run on several web servers so I need to manage connections using a db.
Everything works OK except that the OnDisconnected method is not firing in all web browsers.
It seems to work fine with Firefox, but with IE9 and all mobile browsers it never fires.
So here's my problem: I don't want to rely on this method and end up with lots of unused connections in my db. Besides, even if OnDisconnected worked fine, there is a chance that the server will go down and those unused connections will remain in the db.
I was thinking of a background method that would run, say, every minute and compare the db's connections against the current connections.
The problem is that I don't know how to implement that, or whether it's the best way to do so.
Is there a way to get all valid connections so I can compare them with the db?
Thanks in advance
The OnDisconnected method should ALWAYS fire for every browser. However, when the OnDisconnected method fires may vary.
Here is the process that SignalR goes through when triggering the OnDisconnected method:
SignalR binds to the unload event of the browser and attempts to send an AJAX request to the server to notify it that the client is going away (disconnecting). IF that AJAX request successfully reaches the server, the OnDisconnected method will be triggered immediately. IF that AJAX request fails to reach the server due to network conditions or other unforeseen conditions, then OnDisconnected will not fire UNTIL the ConnectionTimeout (a configuration setting) has elapsed.
So, long story short, OnDisconnected should always eventually fire for every client and every browser. If it does not, you should absolutely file an issue on GitHub.
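For reference, here is a minimal sketch of where those timeouts are configured (this assumes SignalR 1.x registered in Global.asax of an MVC 4 app; the values are placeholders, not recommendations):

    // Global.asax.cs -- sketch assuming SignalR 1.x hosted in an MVC 4 app.
    using System;
    using System.Web;
    using System.Web.Routing;
    using Microsoft.AspNet.SignalR;

    public class MvcApplication : HttpApplication
    {
        protected void Application_Start()
        {
            // How long a transport connection stays open waiting for data
            // before SignalR closes it and the client reconnects.
            GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);

            // How long after a connection goes away before OnDisconnected fires.
            GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);

            // Register the SignalR hub route before the MVC routes.
            RouteTable.Routes.MapHubs();
        }
    }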
Hope this helps!
I've created an SNMP listener application for one of our servers that runs as a service and passively listens for any SNMP message alerts sent from another server; when one is received, it sends out a page/email to the appropriate staff. I followed a few online tutorials for setting up the application as a Windows service, since it needs to run constantly and won't require input/interaction from a user, or interaction with any GUI/desktop applications.
For some reason, when I install the application as a service, it installs correctly but doesn't actually seem to be working. When SNMP messages are sent to the server, nothing happens. However, in my app.publish folder there's an SNMPTrapper.exe application, and if I run that exe on its own, everything works fine. For the time being I'm using a workaround so that the OnStart section of the code for the service basically just launches the SNMPTrapper.exe application, and when the service is stopped, it finds and kills the SNMPTrapper.exe process. At this point, though, the service itself doesn't seem to be working or doing anything. It's essentially just a way to get the SNMPTrapper.exe application launched.
Does anyone know what the issue may be? In some of the tutorials I've read through they outline how to set up polling intervals for the service, but I don't think that would be applicable, since this service will essentially just run constantly to listen for new messages; it won't need to check for anything at a regular interval.
Right now pretty much all of my code is executed in Sub Main() except for a few function calls.
Any help would be greatly appreciated.
You don't state how you're doing any of this. For a Windows service you get two messages from the system: OnStart and OnStop. The job of OnStart is to set up all the code required to do the work, then exit. It doesn't take part in the work itself, so you need to set up a Task or Thread to do that. The Task or Thread should loop until it gets a message, passed by OnStop, that we're done. If you want a service that you can test from the command line, then your Main routine needs to do exactly the same setup, then wait for a key to be pressed before sending an OnStop.
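A rough C# sketch of that shape (the original code is VB.NET, but the pattern is identical; the class name SnmpTrapperService and the ListenLoop method are made-up placeholders for the code currently living in Sub Main()):

    using System.ServiceProcess;
    using System.Threading;
    using System.Threading.Tasks;

    public class SnmpTrapperService : ServiceBase
    {
        private CancellationTokenSource _cts;
        private Task _worker;

        protected override void OnStart(string[] args)
        {
            // OnStart only sets the work up and returns quickly.
            _cts = new CancellationTokenSource();
            _worker = Task.Run(() => ListenLoop(_cts.Token));
        }

        protected override void OnStop()
        {
            // Tell the worker we're done and wait for it to finish.
            _cts.Cancel();
            _worker.Wait();
        }

        private void ListenLoop(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                // Wait (with a timeout, so cancellation is noticed) for the
                // next SNMP trap, then send the page/email as before.
            }
        }

        public static void Main()
        {
            ServiceBase.Run(new SnmpTrapperService());
        }
    }

For the command-line test mode mentioned above, Main (being inside the class, so it can reach the protected methods) can check Environment.UserInteractive, call OnStart directly, wait for Console.ReadKey(), and then call OnStop, instead of handing off to ServiceBase.Run.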
(As an aside, you ARE remembering to start the service once you have installed it?)
We're facing an issue with handling unexpected behaviour when performing XMLHttpRequests on Android devices using React-Native. We've experienced behaviour where the app becomes unable to complete API calls, even though the device is connected to the internet perfectly well (the browser can access non-cached sites just fine). The only way to resolve this issue for our users has been to completely restart the app.
To understand the issue and its severity, we wrapped all our API calls in a timer function in production and sent reports to Sentry whenever a request took longer than 30 seconds to finish. We've now been receiving these kinds of reports in the hundreds per day, with the duration sometimes being hours or even days.
First, instead of using whatwg-fetch, we moved to using axios so that we can manually set the timeout of each request, but this ended up not helping at all.
Second, we dove deeper into understanding how React-Native actually implements timing out XHR requests on Android, and found that it uses OkHttp3 under the hood. OkHttp has default values for its connect, read and write timeouts, and React-Native allows developers to change the value of the connect timeout here. However, OkHttp also has a method for setting a call timeout (covering everything from connecting to reading the response body), but this has a default value of 0 (no timeout) and React-Native doesn't allow users to change it. More on OkHttp timeouts here
My question is whether this could be the cause of our worries and whether it should be reported to React-Native as a bug. Let's assume the following case:
app makes an API call to a resource
okhttp is able to connect to the resource within specified timeout limit (default 10s)
okhttp is able to write the request body to the server within timeout limit (10s)
the server processes the request, but for some reason it fails to start sending a response to the client. I guess there could be many reasons for this, like the server being disconnected, the server crashing, or the server simply losing the connection to the client without the client noticing it. As there is no timeout here, OkHttp will happily wait for days on end for the server to start responding to the request.
Am I missing something here or misunderstanding how timeouts work in OkHttp? Is there another, perhaps better solution than implementing the ability for developers to set callTimeout on API calls performed on Android? And also, isn't it a bit stupid that developers can't set their own write and read timeouts? Can't this lead to unexpected behaviour when you want to send large amounts of data in either direction on a slow connection? 10s is quite long, but perhaps not long enough for all use cases.
Thanks for your help in advance!
I am using libnice on a C++ native server which is trying to make a WebRTC peer connection to a web browser client app. Using libnice, the candidate gathering is successful and the Offer/Answer exchange is successful. It then proceeds with the checking stage which is also successful. I can see in Wireshark that the STUN request/response exchanges are also successful.
However, the candidate checking keeps going on and on, constantly sending/receiving the successful STUN requests/responses.
It is not obvious to me, and the example code does not show, how to actually stop the candidate checks once they have succeeded. I have called the API routine nice_agent_attach_recv() and registered the callback, but it does not seem to fire. And even if it did, the callback signature does not give me any clue as to how to process any of the data.
Question: what processing should be done in the nice_agent_attach_recv() callback?
Thanks,
-Andres
No processing in the callback should need to be done. You would need to call nice_agent_get_local_candidates() and then continue with credentials and so on. There is a decent example here in the reference manual.
A bit unsure where to look for this one...
Context:
an HTML5 web page that uses HTML5 EventSource / server-sent events to get refresh notifications
an OpenWrt Barrier Breaker server, running uHTTPd as the web server
a two-level CGI script that provides the server-sent events:
the CGI is a shell script (ash, not bash) that parses QUERY_STRING and calls...
a C application that does the actual data extraction (from an SQLite database) and pushes the data to the web page
Everything works, except for a little detail: when the web page is closed, the C application keeps running. Since it doesn't expect any user input, its current structure is a simple while(1). So after some time, the OpenWrt box has dozens of copies of the app running.
So the question: how can the application be changed to detect that the client isn't there anymore, and that it should quit?
Thanks
[Edit]
Since posting this a few hours ago, I investigated whether the information was somehow available in the script's input stream. It appears it isn't.
I also found http://html5doctor.com/server-sent-events/ which describes a strategy for doing exactly this in a Node.js environment, but I have no idea how to translate it to a script-based one.
[/Edit]
I have impatient users who update a piece of data on a web page. The update triggers an asynchronous XMLHttpRequest and the response causes the page to update.
My question is this. If the user closes the browser window before the request completes, will the browser send an instruction to the web server to stop the request?
FWIW the users are using Firefox or Safari.
The browser should close the connection, which should result in the server getting a close/FIN packet. At that point it is up to the server to decide whether it finishes processing or stops right there.
I checked to see if I could find what Apache does, but didn't find any documentation on it.
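To illustrate the "up to the server" part: some stacks expose the disconnect to application code. For example, in ASP.NET (a different stack from Apache, used here purely as an illustration), a long-running handler could poll Response.IsClientConnected and abandon the work once the client is gone; this is a sketch, not production code:

    using System.Threading;
    using System.Web;

    // Hypothetical long-running handler that gives up if the browser
    // window was closed before the work finished.
    public class SlowUpdateHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }

        public void ProcessRequest(HttpContext context)
        {
            for (int step = 0; step < 100; step++)
            {
                if (!context.Response.IsClientConnected)
                {
                    return; // client went away; stop wasting work
                }
                Thread.Sleep(100); // stand-in for one chunk of real processing
            }
            context.Response.Write("done");
        }
    }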