One of our distributed apps uses a heartbeat to detect a peer's disconnection (e.g. a broken LAN link).
Is the heartbeat necessary?
Maybe. What will you do if you don't get the heartbeat? If you have no way to recover, there is no point in having a heartbeat.
If you are using callbacks from the server to the client, you need a way for the client to ask the server to resend any lost callbacks, and this is not easy.
Also, a missed heartbeat does not mean a message won't arrive later, as there can be all sorts of network delays. Is it safe to just resend your messages?
The heartbeat is the easy bit; the hard bit is what to do when the heart does not beat!
Yes. TCP would only show that the physical connection is still alive (i.e. the socket was not torn down by routers or by the OS), but it tells you nothing about application availability. If the process at the other end of your pipe is stuck in a while(1); loop and is not processing your requests, you aren't really connected to it.
A heartbeat is quite a good way to know that you are still connected to the other end at the "application level" and that the applications can still talk. Otherwise you would have to assume that "the other end" simply has nothing to "say", which is hard to distinguish from "the other end actually lost network connectivity 35 seconds ago".
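As an illustration, a minimal application-level heartbeat monitor might look like the Java sketch below. The interval, timeout, and the onPeerDead recovery hook are placeholder assumptions, and deciding what onPeerDead should actually do is exactly the "hard bit" mentioned above.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative application-level heartbeat monitor, independent of any particular framework.
// The peer is expected to send some "I'm alive" message periodically; this class only decides
// *that* the peer is gone, while the onPeerDead callback decides *what to do about it*.
public class HeartbeatMonitor {
    private static final long CHECK_INTERVAL_MS    = 5_000;   // assumed heartbeat interval
    private static final long HEARTBEAT_TIMEOUT_MS = 15_000;  // roughly three missed beats

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable onPeerDead;
    private volatile long lastBeat = System.currentTimeMillis();

    public HeartbeatMonitor(Runnable onPeerDead) {
        this.onPeerDead = onPeerDead;
    }

    /** Call this whenever a heartbeat (or any other application message) arrives from the peer. */
    public void beatReceived() {
        lastBeat = System.currentTimeMillis();
    }

    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() - lastBeat > HEARTBEAT_TIMEOUT_MS) {
                // Recovery is the hard part: typical choices are closing the connection,
                // reconnecting, and re-synchronising state (e.g. asking for missed callbacks).
                onPeerDead.run();
            }
        }, CHECK_INTERVAL_MS, CHECK_INTERVAL_MS, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

Counting any incoming application message as a beat keeps the heartbeat itself from adding traffic to an already busy connection.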
We are using RedisPubSubReactiveCommands and calling Lettuce's subscribe and observeChannels methods.
In the case of a fast publisher and a slow subscriber, how is the back pressure handled?
Since publishers and subscribers are independent in Redis, there is no way the producer can be slowed down. Given this fact, which of the following understandings is correct?
1. Does the data get dropped on the application side (Lettuce drops it), depending on the OverflowStrategy passed to observeChannels? If this is the scenario, it's quite inefficient, since the data travels all the way from the Redis server to the application, creating unnecessary network traffic.
2. Does Lettuce convey back pressure to the client-side TCP layer, so that the application doesn't receive anything but the TCP buffers fill up? Looking at this GitHub commit, that seems to be the implementation, but what I don't understand is what is done with the OverflowStrategy provided.
3. Does the back pressure get conveyed all the way to the Redis server, so that network traffic is reduced? This would be the most efficient solution in my opinion, but I don't think it is the behaviour of Redis/Lettuce. What could be the reason for not having it this way?
Could anyone please help us form a correct understanding?
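For context, this is roughly the kind of subscriber in question (a minimal sketch, assuming Lettuce 5.x and Project Reactor; the channel name, the LATEST strategy, and the timings are arbitrary illustrations, not real settings):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands;
import reactor.core.publisher.FluxSink;
import reactor.core.scheduler.Schedulers;

public class SlowSubscriber {
    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://localhost:6379");      // assumed local Redis
        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();
        RedisPubSubReactiveCommands<String, String> reactive = connection.reactive();

        // The OverflowStrategy passed here is exactly the knob the question is about.
        reactive.observeChannels(FluxSink.OverflowStrategy.LATEST)
                .publishOn(Schedulers.boundedElastic())                          // process off the I/O thread
                .doOnNext(msg -> {
                    slowProcessing();                                            // deliberately slow consumer
                    System.out.println(msg.getChannel() + " -> " + msg.getMessage());
                })
                .subscribe();

        reactive.subscribe("events").subscribe();                                // SUBSCRIBE to the channel

        Thread.sleep(60_000);                                                    // let a fast publisher run
        connection.close();
        client.shutdown();
    }

    private static void slowProcessing() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```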
I've been trying to track down an issue here (the Unity devs make it seem like it's somehow a design choice). I've been simulating lag to see how my game behaves over the internet (I've also tried connecting a client from a tethered hotspot, and it disconnected as well), and found that any amount of packet loss (even just 1%) will eventually lead to a disconnection within a couple of minutes.
My game receives data just fine; you can see the remote players moving around right up until the disconnection. I think it has something to do with UNET's heartbeat packets not being resent for some reason, and as soon as the first one is dropped you get disconnected.
If this is a design choice, I can't see how Unity could expect you to have a rock-solid connection when some players will obviously be on a cellular connection. Does anyone know anything about this? I've asked on the Unity forums as well, and there have been no replies for over a week now.
Thanks
I'm working with GameKit.framework and I'm trying to create reliable communication between two iPhones.
I'm sending packets with the GKMatchSendDataReliable mode.
The documentation says:
GKMatchSendDataReliable
The data is sent continuously until it is successfully received by the intended recipients or the connection times out.
Reliable transmissions are delivered in the order they were sent. Use this when you need to guarantee delivery.
Available in iOS 4.1 and later. Declared in GKMatch.h.
I have experienced some problems on a bad WiFi connection. GameKit does not declare the connection lost, but some packets never arrive.
Can I count on 100% reliable communication when using GKMatchSendDataReliable, or is Apple just using fancy names for something they didn't implement?
My users also complain that some data may be accidentally lost during the game. I wrote a test app and found that GKMatchSendDataReliable is not really reliable. On a weak internet connection (e.g. EDGE) some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement for each is received, resends lost messages, and buffers received messages in case of a broken sequence (see the sketch of the idea below).
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
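To illustrate the general approach (this is only a rough Java sketch of the ack/resend/reorder idea, not the actual RoUTP code; the Transport and Receiver interfaces are made up for the example):

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of an ack/resend/reorder layer on top of an unreliable send
// (e.g. wrapping GKMatchSendDataUnreliable). This is NOT the RoUTP code itself.
public class ReliabilityLayer {
    public interface Transport { void send(byte[] datagram); }             // the unreliable channel
    public interface Receiver  { void deliver(long seq, byte[] payload); } // in-order delivery callback

    private final Transport transport;
    private final Receiver receiver;

    private long nextOutgoingSeq = 0;
    private long nextExpectedSeq = 0;
    private final Map<Long, byte[]> unacked = new ConcurrentHashMap<>();   // sent but not yet acknowledged
    private final TreeMap<Long, byte[]> reorder = new TreeMap<>();         // received out of order

    public ReliabilityLayer(Transport transport, Receiver receiver) {
        this.transport = transport;
        this.receiver = receiver;
    }

    /** Send a payload and keep a copy until the peer acknowledges it. */
    public synchronized void send(byte[] payload) {
        long seq = nextOutgoingSeq++;
        unacked.put(seq, payload);
        transport.send(encode(seq, payload));
    }

    /** Call periodically (e.g. from a timer) to resend anything still unacknowledged. */
    public void resendUnacked() {
        unacked.forEach((seq, payload) -> transport.send(encode(seq, payload)));
    }

    /** The peer has acknowledged everything up to and including ackSeq. */
    public void onAck(long ackSeq) {
        unacked.keySet().removeIf(seq -> seq <= ackSeq);
    }

    /** Handle an incoming data message: deliver in order, buffer gaps, drop duplicates. */
    public synchronized void onData(long seq, byte[] payload) {
        if (seq < nextExpectedSeq) {
            return;                                        // duplicate, already delivered
        }
        reorder.put(seq, payload);
        while (reorder.containsKey(nextExpectedSeq)) {     // flush any contiguous run
            receiver.deliver(nextExpectedSeq, reorder.remove(nextExpectedSeq));
            nextExpectedSeq++;
        }
        // A real implementation would send an ACK for (nextExpectedSeq - 1) back to the peer here.
    }

    private static byte[] encode(long seq, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + payload.length); // header: sequence number
        buf.putLong(seq);
        buf.put(payload);
        return buf.array();
    }
}
```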
It is nearly 100% reliable, but maybe not what you need in some cases. For example, if you drop out of the network, everything you sent via GKMatchSendDataReliable will still be delivered, in the order you sent it.
This is brilliant for turn-based games, for example, but if a fast reaction is needed, a client that drops out of the network will not simply skip the missed packets; it will receive all the now-late packets until it catches up to real time again.
The only case in which GKMatchSendDataReliable doesn't send the data is a connection timeout.
I think this would also be the case when you close the app.
I'm trying to communicate between 2 XMPP clients, but this is not like messaging or chatting. It's more like an event caused at one end and an action performed at the other (in real time). I would like there to be no noticeable latency when Client A sends packets to Client B. Is there any way to minimize the latency so that it goes unnoticed? Is this possible with XMPP, or by any other means?
First of all, that is still messaging.
As for your latency, there will always be some latency when sending data between processes. You haven't said what tolerance levels you are looking for as opposed to what you are getting, so it is hard to say what you should do to improve them.
The biggest factors in any latency you currently have will be message size and network speed. Of course, direct point-to-point communication would remove one hop for your message, but without knowing your application there is no way of saying whether this is an acceptable direction.
A small message should be delivered in a few milliseconds on a fast network. If it is a slow network, then your problems lie outside of any communications protocol.
I've got a really busy self-hosted WCF server that requires 2000+ clients to update their status on a frequent basis. What I'm finding is that the server's CPU utilization sits at around 70% constantly, and the clients have only a 50% chance of actually getting a connection to the server; they time out after 60 seconds. This is problematic because if the server doesn't hear back from a client, it assumes the client is offline.
I've implemented throttling so I can adjust concurrent connections/sessions/etc., but if I'm not mistaken, increasing this will only lead to higher CPU utilization and worse connectivity problems. Right?
Will increasing the timeout to something more than 60 seconds help? I'm not exactly sure how it works, but will a client sit in a kind of queue until the server can field the request? Or is it better to set the timeout to something smaller and have the client check in more often if it can't get connected (which seems like it could only make the problem worse)?
If it's really important for the server to know if the client is still connected, I don't think relying solely on WCF is your best bet for that.
Maybe your server should have some sort of ping mechanism that allows it to ping client machines on a timer, or vice versa.
If you're really concerned about the messages always getting through, no matter what, then I suggest exploring reliable sessions. Check out the enableReliableSession behavior attribute. I suggest reading through at least the first chapter of Juval Lowy's Programming WCF Services, which is available for free as the Kindle sample of the book.
Increasing the timeout may help, but probably not much, and the Amazing Ever-Increasing Timeout is something of a motif on http://www.thedailywtf.com. Making the client hammer the server if it can't get through the first time is guaranteed to cause pain.
If all that you care about is knowing whether the client is there, might it be practical to go down a layer or two and have the client send you an HTTP POST once in a while? WCF requires some active back-and-forth, but a POST can just lie there until your server has time to deal with it, and the client can just send it and forget about it.
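A minimal sketch of such a fire-and-forget status POST (shown in Java to stay consistent with the other sketches here; in a WCF shop the client would more likely be .NET, and the endpoint URL, client id, interval, and payload are all made up):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Fire-and-forget "I'm alive" POST: the client never blocks on the response, and the
// server can work through the accumulated posts whenever it has spare capacity.
public class StatusPinger {
    private static final URI STATUS_URI = URI.create("http://server.example.com/status"); // hypothetical endpoint
    private static final String CLIENT_ID = "client-42";                                  // hypothetical id

    public static void main(String[] args) {
        HttpClient http = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            HttpRequest request = HttpRequest.newBuilder(STATUS_URI)
                    .timeout(Duration.ofSeconds(5))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"id\":\"" + CLIENT_ID + "\"}"))
                    .build();
            // Send asynchronously and ignore the result: if this one is lost, the next will do.
            http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
        }, 0, 30, TimeUnit.SECONDS);
    }
}
```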