When should we update players about gameplay changes during the world update? - game-engine

So here is the situation with our proprietary game engine:
We have a multiplayer online game built on our proprietary engine, with hundreds of players interacting with each other. When something changes among them, the game updates all the players every 1/6th of a second.
Right now, on each tick we update all the game objects and execute the gameplay logic. If any event happens that needs to show something to the player, we instantly send a message to that player or to the whole lobby.
To optimize this, would it be better to send such updates as part of the world update event, or to keep things as they are?
So here are more details:

gameloop {
    // #1 tick every alive game object
    //    (send the player a message if there is anything new, like a screen message, etc.)
    // #2 collision test
    // #3 handle collisions for each pair
    //    (send the player a message if there is anything new, like a screen message, etc.)
    // #4 update each player with the change in the world, every 1/6th of a second
}
So when exactly should I send these player messages - immediately at #1 and #3, where they currently originate, or only when #4 happens every 1/6th of a second? The question is about server performance improvement only.
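To make the comparison concrete, here is a minimal sketch of the batched variant, where per-tick messages are queued and only flushed together with the #4 world update. The names (PlayerMessage, queueMessage, flushMessages) are hypothetical and not part of our engine:

// Hypothetical sketch: batch per-tick messages and flush them with the #4 world update.
type PlayerMessage = { playerId: number; text: string };

const pendingMessages: PlayerMessage[] = [];

// Called from #1 (tick) or #3 (collision handling) instead of sending immediately.
function queueMessage(playerId: number, text: string): void {
  pendingMessages.push({ playerId, text });
}

// Called from #4, together with the 1/6th-second world update.
function flushMessages(send: (msg: PlayerMessage) => void): void {
  for (const msg of pendingMessages) {
    send(msg); // could also coalesce messages per player into one packet here
  }
  pendingMessages.length = 0;
}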
Edit: If anyone has anything to ask, please use the comment section. Please do not close my question.

Related

How to detect network disconnects (on an RTCPeerConnection) as soon as possible or the resulting frozen video?

I am using RTCPeerConnection to submit video and audio in a WebRTC-based video messenger. I am able to detect network disconnects after approximately 7 seconds - yet, these are 7 seconds in which the user is staring at a frozen video and starts randomly clicking buttons in the app. I would like to improve the user experience by shortening this timespan - e.g. by informing the user about a network issue if the video freezes for more than 1 second.
Status Quo: I am currently detecting such situations by listening to the onconnectionstatechange event of the RTCPeerConnection. Yet, the event only fires approximately 7 seconds after the disconnect. I determined the ~7 seconds by connecting two machines via normal WiFi, using a hardware switch on one of the laptops to switch off the wireless (such switches exist on some older Lenovo models and guarantee an immediate disconnect), and waiting for the other machine to detect the event.
Consideration: Since the root cause is the interruption of the underlying network connection, it would be ideal to detect the changed network status as early as possible (even if it's just transport delays). That said, the disturbance faced by the user ultimately stems from the video that instantly freezes when the network is interrupted. If there were no way to detect the connection issue earlier, detecting the frozen video instead could be an option. Is either of these two things possible (ideally event-driven, so that I don't need to poll things every second)?
Here's a very simple code snippet describing my current disconnect detection:
myRTCPeerConnection.onconnectionstatechange = (event: Event) => {
  let newCS = myRTCPeerConnection.connectionState;
  if (newCS == "disconnected" || newCS == "failed" || newCS == "closed") {
    // do something on disconnect - e.g. show messages to user and start reconnect
  }
};
(ice)connectionstatechange is the right event in general.
If you want more granularity you'll need to poll getStats and look for stats like framesReceived. But there is no guaranteed frame rate sent from the other side (e.g. with screen sharing you can go below one frame per second).
While the actual ICE statistics like requestsSent seem more useful, they change much less frequently - only once per second - and you can lose a packet or it can arrive late.
In general this is a question of how reliable the detection of the network failure is. If it is too aggressive, you end up with a poor UX, showing a warning too often.
You might not end up with something that is significantly better than what you have, at the cost of introducing complexity that you need to maintain.
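As a rough, non-authoritative sketch of that polling approach, one could compare framesReceived between two getStats() samples; pc stands for the RTCPeerConnection in question and the one-second interval is arbitrary:

// Sketch only: poll getStats() and compare framesReceived between samples.
let lastFramesReceived = 0;

async function checkForFreeze(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stat: any) => {
    if (stat.type === "inbound-rtp" && stat.kind === "video") {
      const frames = stat.framesReceived ?? 0;
      if (frames === lastFramesReceived) {
        // No new frames since the last sample - maybe frozen, maybe just a very low frame rate.
        console.warn("No new video frames received since the last poll");
      }
      lastFramesReceived = frames;
    }
  });
}

// e.g. setInterval(() => checkForFreeze(myRTCPeerConnection), 1000);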
Thanks Philipp for your insights - this pointed me in the right direction.
I'm now looking into using getStats to identify any freezes. At first sight, polling the framesPerSecond value seems most promising to me. The good thing: it reacts instantly upon disconnect - and it still works when the underlying video stream is paused (I'm allowing the user to pause video submission, implemented by setting all video tracks to enabled = false). I.e. even if the video tracks are disabled on the sending side, the receiving side still continues to receive the agreed frames per second.
As the getStats function appears to be weakly documented at the time of writing, and there are rarely simple examples of its usage, please find my code extract below:
peerRTCPC
  .getReceivers()
  .forEach(
    (
      receiver: RTCRtpReceiver,
      index: number,
      array: RTCRtpReceiver[]
    ) => {
      if (receiver.track.kind == "video") {
        receiver.getStats().then((myStatsReport: RTCStatsReport) => {
          myStatsReport.forEach(
            (statValue: any, key: string, parent: RTCStatsReport) => {
              if (statValue.type == "inbound-rtp") {
                console.log(
                  "The PC stats returned the framesPerSecond value " +
                    statValue["framesPerSecond"] +
                    " while the full inbound-rtp stats reflect as " +
                    JSON.stringify(statValue)
                );
              }
            }
          );
        });
      }
    }
  );
Note that upon disconnect, the framesPerSecond value does not necessarily go to zero, even though the webrtc-internals screen suggests that it does. I am seeing undefined when a disconnect happens.
The runtime impact of polling this at high frequency, or across larger numbers of connections, probably needs to be looked at more closely. Still, this seems like a good step in the right direction, as long as it is not done way too frequently.
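For reference, a minimal sketch of how the undefined case could be folded into a freeze check - again just an illustration, with pc standing for the RTCPeerConnection being monitored:

// Sketch: treat a missing framesPerSecond value as a possible freeze/disconnect indicator.
async function videoLooksFrozen(pc: RTCPeerConnection): Promise<boolean> {
  let frozen = false;
  const report = await pc.getStats();
  report.forEach((stat: any) => {
    if (stat.type === "inbound-rtp" && stat.kind === "video") {
      // On disconnect the value tends to become undefined rather than dropping to 0.
      if (stat.framesPerSecond === undefined || stat.framesPerSecond === 0) {
        frozen = true;
      }
    }
  });
  return frozen;
}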

Google Play Services Multiplayer matching players errors

I developed a game for Android with the Google Play Services real-time multiplayer feature. I currently have a problem when matching the players. I don't use any invite feature, so all players just use the automatch functionality.
My game can be played with 4 players, but games with just 3 or 2 players are also possible. For my testing with 2 devices I use:
RoomConfig.createAutoMatchCriteria(minNumberOfOpponents, maxNumberOfOpponents, 0);
If I keep starting, ending and restarting games a number of times, it often happens that the clients are not connected correctly. In the working cases, onRoomConnected() is called correctly and the game starts. In some cases, though, this does not happen. In these cases, one device finds the other device and its onPeerJoined() and onRoomConnecting() callbacks are called, but onRoomConnected() is never called. That's because the other device gets no information whatsoever - just the onRoomCreated() callback is called, and that's it.
So one device finds the other and gets the information that another device joined the room. It also gets informed when this device leaves the room again. But the other device doesn't recognize any of this.
In case it helps: I had some issues with losing the connection before, and fixed them by restarting the apiClient every time a room is left on any client. I don't think this is related, though.
I thought it might be a problem with leaving the current room correctly and somehow joining the old room again, but it also happens when starting the app for the first time. Also, the apiClient reconnect should avoid this problem.
Thanks in advance.
Edit: It seems like it's just my Nexus 5 which produces the error. Every other device I tested works fine, and the Nexus 5 does too in most cases. If the clients get connected and the game starts, there has never been any problem. The error only happens on this one device, and only in maybe 5 out of 6 cases, when searching for an online game.
It simply stops getting any callbacks - sometimes right after onRoomCreated(), sometimes after it has found another peer and onRoomConnecting() has been called, and sometimes after onRoomConnected() has been called.
The other device gets its appropriate callbacks called in these cases, though.
So if the failing device stops at onRoomCreated(), the other device still finds the client.
If the failing device finds the other device, gets onRoomConnecting() called and stops after that, the other device still gets its onRoomConnected().
And if the failing device gets its onRoomConnected() called, it sometimes even stops getting any messages from there on, while the other device is already in the game.
Maybe this helps someone. I'm not 100% sure I fixed my problem - I haven't tested it in depth - but it seems everything is working fine now.
My problem was that I have two different threads in my application: the standard activity GUI thread starts the apiClient and handles the callbacks, while the game engine thread initiates the room creation and sends the reliable messages via the apiClient.
It seems that sometimes things get messed up while the peers connect and exchange their first data. I now avoid calling any apiClient actions directly from the game engine thread and instead use runOnGuiThread to handle these actions on the activity GUI thread.

NetStream creating a seamless dynamic playlist AS3

I need to create a playlist dynamically with near seamless transitions in AS3.
I have tried to use the play2 command with .APPEND. It does work in a non-dynamic setting.
But what I have is this: at the launch of the application I know what the first video is; then, before that video ends, I will know what the next video to play will be, and so on until I get the message that I have played the last video.
So, at the beginning, I do not know how many videos there will be, nor do I know the order of the files that will play.
If I try to add a video with APPEND while the stream is already playing, it seems to replace the currently playing video instead of starting to buffer and only playing at the end of the current video.
I also cannot use appendBytes, as the video files have to be in H.264 format.
Anyone's help would be greatly appreciated, as I do not know in which direction to look anymore. I can give more details if necessary.
Thank you very much.
This is a bit of an off-the-cuff answer, but the logic is sound & should give you another direction to pursue.
Firstly, the concept: with Flash video you have 2 completely separate processes occurring simultaneously:
buffering / loading
the video playing
Thus, playing & streaming can & do occur simultaneously, but separately & that is where the logic should be hooked in.
So, on to the implementation: the idea would be to have a primary player, and a secondary (shadow) player / loader. The primary player is responsible for loading the initial video & playing it.
[& here comes the magic]
Once buffering in the primary player is complete (determined by the NetStream.Buffer.Flush NetStatusEvent on the NetStream object), begin buffering the following video in the shadow player: initialise the connection & use NetStream.pause() to begin buffering, but not playing, while the primary player plays out.
When playing is complete in the primary player (determined by the NetStream.Play.Stop event) you can pass the variables (NetConnection, NetStream & Video - always passed by reference) from the shadow player over to the primary player & it should continue practically seamlessly. Then clear the values from the shadow player & repeat the above process, waiting for buffering to complete before loading the next video; ad infinitum.
Alternatively, you can have a more balanced approach - although in my mind this will be more resource intensive (as you'll have 2 video players continually active) - and have a primary & secondary player that alternate. As soon as one buffer is complete, you begin buffering the next; as soon as playing is complete, you switch from one player to the other.
This will be pretty fiddly to assemble (hence the lack of an example - as it is complicated, and in essence, your job ;) since you'll be dealing with 2 sets of NetConnections, NetStreams & Videos - which are complicated to begin with - and lots of events that require handling...
But I don't think play2() is your answer here; that is used primarily to reawaken broken/closed NetConnections. The problem that faces you here is the seamless synchronisation of 2 separate NetConnections & NetStreams.
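Not AS3, but to make the hand-off logic above concrete, here is the same primary/shadow idea sketched with two HTML video elements in TypeScript, purely as an illustration of the pattern (the NetConnection/NetStream specifics and the dynamic playlist handling are left out):

// Illustrative only: primary/shadow hand-off between two video elements.
const primary = document.createElement("video");
const shadow = document.createElement("video");
document.body.append(primary, shadow); // sizing/visibility handling omitted

function playSeamlessly(urls: string[]): void {
  let index = 0;
  let active = primary;
  let standby = shadow;

  function preloadNext(): void {
    if (index + 1 < urls.length) {
      standby.src = urls[index + 1];
      standby.preload = "auto";
      standby.load(); // buffer the next clip while the active one plays
    }
  }

  function onEnded(): void {
    active.removeEventListener("ended", onEnded);
    if (index + 1 >= urls.length) return; // the last video has played
    index++;
    [active, standby] = [standby, active]; // the preloaded player takes over
    active.addEventListener("ended", onEnded);
    active.play();
    preloadNext();
  }

  active.src = urls[index];
  active.addEventListener("ended", onEnded);
  active.play();
  preloadNext();
}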
Ping me if you still need assistance/explanation here, this is a bit of an old Q & I don't want to write a few hundred lines of code if you've already moved on...
Best, a.)

Design pattern for a background-working app

I have created a web-service app and I want to populate my view controllers according to the response I fetch (via GET) on the main thread. But I want to create a scheduled timer which will check my server, and if there is any difference (let's say the count of an array has changed) I will create a local notification. As far as I have read here and in some Google results, I can't run my app in the background for more than ten minutes, except in some special situations (audio, VoIP, GPS). But I need to check the server at least once per minute. Can anyone offer some idea or link, please?
EDIT
I will not sell the app in the store; it is just for a local area network. Let's say, from the server I will send some text messages to the users, and if a new message comes, the count of the messages array will increment; in this situation I will create a notification. I need to keep this 'controlling' routine alive forever, whether in the foreground or the background. Does GCD give such a solution? Does anyone have any idea?
Just play a muted audio file in a loop in the background, OR ping the user's location in the background. Yes, that will drain the battery a bit, but it's a simple hack for in-home applications. Just remember to enable the background modes in your Info.plist!
Note: "[...] I fetch (via GET) in main thread." This is not a good approach. You should never fetch any network resources on the main thread. Why? Because your GUI, which is maintained by the main thread, will become unresponsive whenever a fetch isn't instantaneous. Any lag spike on the network results in a less than desirable user experience.
Answer: Aside from the listed special situations, you can't run background apps. The way I see it:
Don't put the app in the background. (crappy solution)
Try putting another "entity" between the app and the "server". I don't know why you "need to control the server at least once per minute", but perhaps you can delegate this "control" to another process outside the device (as sketched below)?
iOS app -> some form of proxy server -> server which requires "babysitting" every minute.

iOS: Handling overlapping background requests

In an iOS app, I'm writing a class that will be messaged, go do a background request (via performSelectorInBackground:withObject:), and then return the result through a delegate method (that will then be displayed on a map). Everything seems to work right when one request is happening at a time, but I'm trying to figure out how to handle multiple overlapping requests. For example, if a user enters something in a search box that starts a background thread, and then enters something else before the initial background thread completes, how should this be handled?
There are a few options (don't let the second request start while the first is in progress, stop the first as soon as the second is requested, let both run simultaneously and return independent results, etc.), but is there a common/recommended way to deal with this?
I don't think there's a universal answer to this. My suggestion is to separate tasks (in the form of NSOperations and/or blocks) by their function and the relationships between them.
Example: you don't want to add an image-resizing operation to the same queue as fetching some unrelated feed from the web, especially if no relationship between them exists. Or maybe you do, because both require a great amount of memory and therefore can't run in parallel.
But you'd probably want to add web image search operations to the same queue while cancelling operations of the same type that were added to this queue before. Each of those image search operations might init an image resize operation and place it into some other queue. Now you have a relationship and have to cancel the resizing in addition to the image search operation. What if the image search operation takes longer than the associated resize operation? How do you keep a reference to it or know when it's done?
Yeah, it gets complicated easily, and sorry if I didn't give you any concrete answers - every situation is unique - but making it run like a Swiss clock in the end is very satisfying :)
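As a generic illustration of the "stop the first as soon as the second is requested" option from the question - shown in TypeScript with fetch and AbortController purely for illustration, not the iOS/NSOperation APIs the answer refers to:

// Illustrative sketch: each new search cancels the previous in-flight request.
let currentController: AbortController | null = null;

async function search(query: string): Promise<unknown> {
  currentController?.abort(); // stop the earlier request, if any
  currentController = new AbortController();
  try {
    const response = await fetch("/search?q=" + encodeURIComponent(query), {
      signal: currentController.signal,
    });
    return await response.json(); // only the most recent request gets this far
  } catch (err) {
    if ((err as Error).name === "AbortError") return null; // superseded by a newer search
    throw err;
  }
}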