How to synchronize host's and client's games in an online RTS game? (VB.NET)

So my project is an online RTS (real-time strategy) game in VB.NET where, after a matchmaking server matches two players, one player is assigned as host and the other as client in a TCP socket connection.
My design is that server and client only record and send information about inputs, and the game takes care of all the animations/movements according to those inputs.
This works well, except when it comes to having both the server's and client's games run at exactly the same speed. The operations in the games seem to be handled at different speeds, regardless of whether I have both client and server on the same computer or use different computers as server/client. This is obviously crucial for the game to work.
Since the game at times has 300-500 units, I thought it would be too much to send an entire game state from server/client 10 times a second.
So my questions are:
How to synchronize the games while sending only inputs? (if possible)
What other designs are doable in this case, and how do they work?
(In VB.NET I use timers for operations, such that every 100 ms (the timer interval) a character moves and changes animation, and stuff like that.)
Thanks in advance for any help on this issue, my project really depends on it!

Timers are not guaranteed to tick at exactly the set rate. The speed at which they tick can be affected by how busy the system and, in particular, your process are. However, while you cannot get a timer to tick at an accurate pace, you can calculate exactly how long it has been since some particular point in time, such as the point when you started the timer. The simplest way to measure time like that is to use the Stopwatch class.
When you do it this way, you don't have a smooth sequence of events where, for instance, something moves one pixel per timer tick. Instead, you have a path where you know something is moving from one place to another over a specified span of time, and, given the current exact time according to the stopwatch, you can determine the current position of that object along the path. On a system which is running faster the movement will appear smoother, while on a system which is running slower it will look more jumpy, but both systems will agree on where the object is at any given moment.
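Here's a minimal sketch of that idea, written in Swift purely for illustration (in VB.NET the elapsed time would come from a Stopwatch's Elapsed property); the MovingUnit type and its fields are made up for the example:

    import Foundation

    // Illustrative only: a unit moving from `start` to `end` over `duration` seconds.
    // Position is derived from elapsed time rather than from a count of timer ticks,
    // so two machines whose timers fire at different rates still agree on where
    // the unit is at any given moment.
    struct MovingUnit {
        let start: (x: Double, y: Double)
        let end: (x: Double, y: Double)
        let departureTime: Date      // when the move command was issued
        let duration: TimeInterval   // how long the whole move should take

        func position(at now: Date = Date()) -> (x: Double, y: Double) {
            let elapsed = now.timeIntervalSince(departureTime)
            let t = max(0.0, min(1.0, elapsed / duration))   // clamp progress to [0, 1]
            return (start.x + (end.x - start.x) * t,
                    start.y + (end.y - start.y) * t)
        }
    }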

Related

How to handle saving large data in a distributed environment

In Elixir (and Erlang), we are encouraged to use processes to divide the work and have them communicate with small messages. Somehow, though, I also need to handle not-so-small data that might be useful to more than a single process, and I'm unsure how. Here's my use case:
I've designed a simple card game which allows multiple players to join the same game through their browser, but also to create new ones. Basically, I'm keeping the card game in a process (so I create a new process whenever a player asks to create a new game). I would also like my processes to somehow save the card game on disk (or whatever storage is available).
My first reaction was to avoid doing this in the game process itself, so it wouldn't "slow down" my game too much: while the disk is being written to (and serialization has to occur), messages sent by players to the game will be delayed. So I thought I'd create "save" processes whose job is simple: to handle the card game for a given game and store it on disk. These processes would be servers reacting to casts, so that the game process could just hand over the data whenever an action occurred (here: that's my card game, save it and, well, I'm off).
And now another problem arises: the card game has to be sent over the network (which might take a while if the save process lives on a different node). This might slow down process communication. In fact, it might also slow down the heartbeat of individual processes.
My games aren't that large. At current testing, they weigh about 4k. And yet, 4k of data might be a lot on a slow network. (Don't take network speed for granted.) I don't think I really have to worry, in my situation (I could actually save the game directly in the game process and save the trouble, it won't slow down my game that much) but I'm interested in solutions and I'm coming up blank.
The advantage of "save" processes was that they could live on another node: if the game process crashed, one would be recreated dynamically and ask the save processes if anyone had a copy of game ID 121. If a save process crashed, the game processes could send their updated copy to another process/node. It seemed like a good way to keep things in a good state. Of course, having a game process and save process crash at the same time would ruin some data, but there's only so much one can do in a distributed environment (or any environment, for that matter). Plus, in this scenario, communication between the node(s) hosting the games (they can be spread over several nodes) and the node(s) saving data wouldn't have to be particularly fast, since the only communication would be one-sided and incremental (unless an error occurred, as described).
This is more a theoretical question. Elixir (or Erlang) isn't the only way to create distributed systems, though the message-passing and heartbeat mechanics might be different elsewhere. Still, I would like to hear thoughts on ways to improve my system to handle data saving.
Thanks for your answers,
I think the main issue here is how to save large data without blocking and without causing a backlog.
Blocking can happen if the main process also does the saving, but if it hands it off to a separate process, that can cause a backlog and possible data loss in case of crash.
The best way forward that I can think of is to not save the whole state every time, but to save each mutation of the game state as an individual event, and have some logic that recreates the state by replaying those events when restoring.
To optimize this further, the "save process" can also periodically dump the whole state, so that the max number of entries to roll up on recovery is limited.
What I described here is a very basic version of how many databases write transactions first to an append-only log file, and roll it up in batches later.
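Here is a rough, hypothetical sketch of that append-only-log-plus-snapshot idea (the question is about Elixir, but the sketch is in Swift and every name in it is made up; only the shape of the write path matters):

    import Foundation

    // Hypothetical sketch: each mutation is appended to a log file as a single line,
    // and a full snapshot of the game is written every `snapshotEvery` events, so
    // recovery only has to replay the events recorded after the latest snapshot.
    final class GameSaver {
        private let logURL: URL
        private let snapshotURL: URL
        private let snapshotEvery: Int
        private var eventsSinceSnapshot = 0

        init(directory: URL, gameID: String, snapshotEvery: Int = 100) {
            logURL = directory.appendingPathComponent("\(gameID).events")
            snapshotURL = directory.appendingPathComponent("\(gameID).snapshot")
            self.snapshotEvery = snapshotEvery
        }

        func record(event: String, fullState: @autoclosure () -> Data) throws {
            // Appending one small event is cheap compared to rewriting the whole state.
            let line = Data((event + "\n").utf8)
            if let handle = try? FileHandle(forWritingTo: logURL) {
                _ = handle.seekToEndOfFile()
                handle.write(line)
                handle.closeFile()
            } else {
                try line.write(to: logURL)   // first event: create the log file
            }

            eventsSinceSnapshot += 1
            if eventsSinceSnapshot >= snapshotEvery {
                // Periodic roll-up: persist the full state and truncate the log.
                try fullState().write(to: snapshotURL, options: .atomic)
                try Data().write(to: logURL)
                eventsSinceSnapshot = 0
            }
        }
    }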

Memory efficient way of using an NSTimer to update a MenuBar Mac application?

I am creating an application for Mac, in Objective-C, which will run in the menu bar and do periodic desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1 second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible idle "gap" between wake-ups is 100ms. But if they were configured at 1 second with a 0.9 second tolerance (i.e. firing somewhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (It sounds like you already have this in hand.) And the second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not literally "zero tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
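For example, a minimal sketch using Foundation's Timer (the Objective-C equivalent is NSTimer's tolerance property); the five-minute interval and 30-second tolerance are just placeholders:

    import Foundation

    // Fire roughly every 5 minutes with a generous tolerance, so the system can
    // coalesce this wake-up with other timers and spend more time idle.
    let workInterval: TimeInterval = 5 * 60
    let timer = Timer.scheduledTimer(withTimeInterval: workInterval, repeats: true) { _ in
        // Do the real work here (e.g. change the wallpaper), not a "should I work yet?" check.
    }
    timer.tolerance = 30   // being up to 30 s late is fine for this kind of task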
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation time. I don't recommend it for the problem you're describing.

Is it possible to synchronize two computers at better than 1 ms accuracy using any internet protocols?

Say I have two computers - one located in Los Angeles and another located in Boston. Are there any known protocols or linux commands that could synchronize both of those computer clocks to better than 1 ms and NOT use GPS at all? I am under the impression the answer is no (What's the best way to synchronize times to millisecond accuracy AND precision between machines?)
Alternatively, are there any standard protocols (like NTP) in which the relative time difference could be known accurately between these two computers even if the absolute time synchronization is off?
I am just wondering if there are any free or inexpensive ways to get better than 1 ms time accuracy without having to resort to GPS.
I don't know of an existing protocol for this (perhaps there is one), but I can offer a method similar to the way scientists measure speeds near that of light:
Have a process on both servers "pinging" the other server, waiting for the response, and timing how long the response took. Then start pinging periodically, exactly when you expect the next ping to come in. By averaging (and discarding any far-off samples), after a while you will have the two servers "thumping away" at the same rhythm. Each side can also tell, to very high accuracy, how long each "beat" takes for it, by dividing a (long) period of time by the number of beats it contains.
Once the "rhythm" is established, if you know that one of the servers' time is correct, or you want to use its time as the base, then you know what time it is when your server's signal reaches the other server. Along with the response, it sends you what time IT has. You can then use that time to synchronize your own time system.
Last but not least, most operating systems only let non-kernel (user-mode) code act with roughly 32 milliseconds of timing accuracy; you cannot expect something to happen at a finer resolution than that. The only way to overcome that is to have a "native" DLL that can react and run with the clock, and even that will only give you a certain reaction speed, depending on the system (hardware and software).
Read about real-time systems, and consider what kind of "server" you are talking about (Windows? Linux? Embedded software on a microchip? Something else?).

Rewind a SKPhysicsWorld

Let's say I have an SKScene with no gravity, and a single sprite that has been given an impulse and is slowly moving its way through that SKScene. Let's say I'm at the 10 second mark, and I'd like to see where my sprite would be if I had given it a nudge at the 5 second mark. In other words, I'd like to quickly rewind the scene to the 5 second mark, apply an impulse, and then quickly fast-forward it back to the 10 second mark. Ideally, the user wouldn't need to see this activity; it would just be a jump to the new position/vector of the sprite.
The problem I'm trying to solve here is how to do a persistent MMO 2D space game using Sprite Kit and a WebSocket server. And I'm a little bit stuck on how to keep all the players in sync with each other and still have relatively latency-free play. I considered:
Game Kit -- Not enough players, not persistent
Client sends commands (thrust, turn, etc.) and the server manages physics, responding with new locations/vectors:
If I send constant position updates for all objects, this doesn't seem efficient: in space, all objects are constantly moving/rotating, so this would result in a huge stream.
If I send updated vectors, local objects will constantly stutter and jump around, since the latency of each update varies with network conditions.
Client sends commands; the server records the commands, gives them an official timestamp, and sends them down to the clients; each client then matches the official timestamp against its own running time code, rewinds its local physics to the time the command was sent, applies the related change, and then fast-forwards back to where the user was. This might result in a single ship jumping a little bit, but could be smoother overall.
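A rough sketch of that third option (SKPhysicsWorld itself can't be stepped backwards, so this replays from a saved snapshot using simple fixed-step integration instead of the SpriteKit physics engine; every type and field here is made up):

    import CoreGraphics
    import Foundation

    struct Command { let timestamp: TimeInterval; let apply: (inout WorldState) -> Void }
    struct WorldState { var position: CGPoint; var velocity: CGVector }

    // Restore the snapshot taken at `snapshotTime`, then replay forward to `now`,
    // applying each command at its official (server-stamped) time. Fixed-step
    // integration keeps the replay deterministic on every client.
    func resimulate(from snapshot: WorldState,
                    snapshotTime: TimeInterval,
                    commands: [Command],
                    to now: TimeInterval,
                    step: TimeInterval = 1.0 / 60.0) -> WorldState {
        var state = snapshot
        var time = snapshotTime
        var pending = commands.sorted { $0.timestamp < $1.timestamp }

        while time < now {
            while let next = pending.first, next.timestamp <= time {
                next.apply(&state)      // e.g. add an impulse to `velocity`
                pending.removeFirst()
            }
            state.position.x += state.velocity.dx * CGFloat(step)
            state.position.y += state.velocity.dy * CGFloat(step)
            time += step
        }
        return state
    }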

Calculating number of seconds between two points in time, in Cocoa, even when system clock has changed mid-way

I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that's using timestamps to calculate statistics for how long something is being displayed on the screen. Time is calculated periodically while the program runs using a repeating NSTimer. [NSDate date] is used to capture timestamps, Start and Finish. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
1. How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed mid-way?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds have passed since then. For example, system uptime. 10.6 has - (NSTimeInterval)systemUptime, part of NSProcessInfo, which provides system uptime. However, this won't work, as my app must also run on 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. A timer is scheduled in a run loop mode, and the run loop only runs in one mode at a time. NSTimer is (by default) scheduled in the default run mode. If a user manipulates the UI for long enough, the run loop enters NSEventTrackingRunLoopMode and stops servicing the default mode, which can lead to NSTimer firings being skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (with its own NSRunLoop) to run an NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure this would work accurately if the CPU gets pegged by another application (Photoshop rendering a large image, etc.), causing my NSRunLoop to be put on hold long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have 2 choices:
For absolute precision, use mach_absolute_time(). It will give the time interval exactly between the points at which you called the function.
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare the [[NSApp currentEvent] timestamp] values of those two events.
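A minimal sketch of that second option (the handler names are made up; NSEvent's timestamp is seconds since system startup, so it is unaffected by wall-clock changes):

    import AppKit

    var startTimestamp: TimeInterval = 0

    // Call these from whatever events begin and end the thing being measured.
    func handleStartEvent(_ event: NSEvent) {
        startTimestamp = event.timestamp
    }

    func handleFinishEvent(_ event: NSEvent) {
        let elapsed = event.timestamp - startTimestamp
        print("Displayed for \(elapsed) seconds")
    }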
Okay, so this is a long shot, but you could try implementing something sort of like the NSSystemClockDidChangeNotification available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running for the duration of your program? This thread would, every n seconds, read the system time and store it. For the sake of argument, let's make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), post a notification that there has been a significant time change. You would need to play around with fuzzing the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like that, solve your issue?
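In code, the watchdog idea might look roughly like this (a hedged sketch; the notification name and the 5-second/2-second values are arbitrary):

    import Foundation

    // Watchdog: sample the wall clock every `interval` seconds and flag any reading
    // that jumps by much more (or less) than the interval, which suggests the
    // system clock was changed while we were waiting.
    let clockDidJumpNotification = Notification.Name("WatchdogClockDidJump")

    final class ClockWatchdog {
        private let interval: TimeInterval
        private let slack: TimeInterval      // tolerated scheduling jitter
        private var lastReading = Date()
        private var timer: Timer?

        init(interval: TimeInterval = 5, slack: TimeInterval = 2) {
            self.interval = interval
            self.slack = slack
        }

        func start() {
            lastReading = Date()
            timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                let now = Date()
                let delta = now.timeIntervalSince(self.lastReading)
                if abs(delta - self.interval) > self.slack {
                    // The clock moved far more (or less) than ~interval seconds: assume it was reset.
                    NotificationCenter.default.post(name: clockDidJumpNotification, object: self)
                }
                self.lastReading = now
            }
        }
    }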
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be to, instead of having a watchdog thread, have a separate watchdog process that has top priority so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime is also a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works on 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. That should be based on the same underlying clock as the value returned by NSProcessInfo.
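The conversion from mach_absolute_time() ticks to seconds goes through mach_timebase_info(), which is the step that tends to be poorly documented. A hedged sketch (shown in Swift; the same calls exist in C/Objective-C via <mach/mach_time.h>):

    import Darwin

    // Monotonic time in seconds: unaffected by the user or ntp changing the clock.
    func monotonicSeconds() -> Double {
        var timebase = mach_timebase_info_data_t()
        mach_timebase_info(&timebase)   // ticks-to-nanoseconds fraction for this hardware
        let nanos = Double(mach_absolute_time()) * Double(timebase.numer) / Double(timebase.denom)
        return nanos / 1_000_000_000.0
    }

    // Usage: the difference between two readings is the elapsed wall time.
    let start = monotonicSeconds()
    // ... work happens here ...
    let elapsed = monotonicSeconds() - start
    print("elapsed:", elapsed)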
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds, or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge on it, and not being able to find much documentation on the web about it. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only if the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply keep returning the same old timestamp in an NSTimer firing method, until the end-user decided to interact with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)