Average time per step is constant despite a variable delay - vba

I've made a cellular automaton (Langton's ant, FYI) in VBA. At each step there is a Sleep(delay) call, where delay is a variable. I've also added DoEvents at the end of the display function to ensure that each step is shown on screen.
With a Timer I can monitor how long one step requires on average. The result is plotted on the graph below (Y-axis: time per step in ms; X-axis: delay in ms).
Could you explain to me why it looks like that? In particular, why does it stay steady? In my opinion, I should be getting (more or less) a straight line.
I got these results without doing anything else on my computer during the whole process.
Thank you in advance for your help,

That would be because the Sleep API is based on the system clock. If your clock's resolution is coarser than the time you are asking to sleep, the sleep is rounded up to the next tick of the system clock. You can call timeGetDevCaps to see the minimum timer resolution of your system.
Think about it this way. You have a normal watch with only the usual hour/minute/second hands (no hand for milliseconds). You want to time half a second, but your watch only moves in one-second intervals, so its resolution is one tick per second. You would not know that half a second had passed until the full second ticked over, so the half second is effectively rounded up to the next tick.
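On Windows you can query that resolution directly with timeGetDevCaps from winmm.dll, and you can see the rounding by measuring how long short sleeps actually take. Below is a minimal, Windows-only Python sketch (the asker's code is VBA, but it goes through the same Windows sleep machinery); the millisecond values in the loop are just illustrative, and the exact rounding you observe depends on the system timer settings:
import ctypes
import time
from ctypes import wintypes

# TIMECAPS structure used by timeGetDevCaps (wPeriodMin is the finest timer period in ms)
class TIMECAPS(ctypes.Structure):
    _fields_ = [("wPeriodMin", wintypes.UINT), ("wPeriodMax", wintypes.UINT)]

caps = TIMECAPS()
ctypes.WinDLL("winmm").timeGetDevCaps(ctypes.byref(caps), ctypes.sizeof(caps))
print("minimum timer period:", caps.wPeriodMin, "ms")

# Request sleeps shorter and longer than the timer period and measure what we actually get
for requested_ms in (1, 2, 5, 10, 20, 50):
    start = time.perf_counter()
    time.sleep(requested_ms / 1000.0)
    print(f"requested {requested_ms:3d} ms, got {(time.perf_counter() - start) * 1000:.1f} ms")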

Related

SUMO sim time and real time difference

I used traci.simulation.getTime to get the current sim time of SUMO.
However, this time runs faster than real time.
For example, while sim time grows from 0 to 100, real time only grows from 0 to 20.
How can I make the SUMO simulation time run at the same pace as real time?
I tried --step-length = 1, but this didn't work.
The --step-length property is a value in seconds describing the length of one simulation step. If you put a higher number here, vehicles have less time to react, but your simulation probably runs faster.
For the real-time issue you might have a look at the sumo-user mailing list. I think this mail gives a pretty good answer to your issue:
the current limit to the real time factor is the speed of your computer. If you want to slow the GUI down you can change the delay value (which is measured in milliseconds), so a value of 100 would add 100 ms to every simulation step (if your simulation is small and runs with the default step length of 1 s, this means a factor of 10). If you want to speed it up, run without the GUI or buy a faster computer ;-).
To check how close your simulation is to wall-clock time, you can check the generated output from SUMO. The thing you're looking for is called the Real time factor.
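If you want to pace the simulation from the TraCI side rather than through the GUI delay, one option is to sleep after each step until wall-clock time has caught up with simulation time. A rough Python sketch, assuming a standard traci setup (the scenario.sumocfg path and the loop condition are placeholders for your own scenario):
import time
import traci

STEP_LENGTH = 1.0  # simulated seconds per step, matching --step-length 1

traci.start(["sumo", "-c", "scenario.sumocfg", "--step-length", str(STEP_LENGTH)])
wall_start = time.perf_counter()

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # sleep until wall-clock time catches up with simulation time (real time factor ~1)
    remaining = (wall_start + traci.simulation.getTime()) - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)

traci.close()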

How to read speed of waveform chart generated from labview?

I need to know the speed of a waveform chart in LabVIEW.
The program generates two waveforms shifted by 90 degrees, and I need to make a program that finds the speed of both.
Neither waveform is "generated first". Every iteration of the loop will result in a different true/false value being placed onto the chart. On some iterations, the top one will update first; on other iterations, the bottom one will update first.
What you are seeing in the charts is NOT a coherent waveform. It is just a series of values that you have chosen to plot. There's no time data associated with this, just the values and an iteration count. The iteration counter is the clock of this algorithm, so, in that sense, both waveforms are generated at exactly the same rate at exactly the same time. (See below for comments about the Timed Loop.)
I doubt that this answers the question you think you are asking. You seem to want to know some information computed from these series of true/false values, but the terminology that you've used is not meaningful, and I cannot determine what information it is that you actually want.
I said earlier that the only clock for this algorithm is the iteration counter of the loop. You used a Timed Loop with a dt of 1. Are you on Windows? If so, then my statement is correct: the Timed Loop on Windows is only a simulation without any guarantee of timing, so you might as well be using a regular While Loop. If you are on a real-time OS with the LabVIEW Real-Time module, then the loop generates a point every 1 millisecond, so the iteration count is tied to the computer's clock and both waveforms advance at a rate of one point per millisecond.

Understanding simple simulation and rendering loop

This is an example (pseudo code) of how you could simulate and render a video game.
//simulate 20ms into the future
const long delta = 20;
long simulationTime = 0;
while(true)
{
    while(simulationTime < GetMilliSeconds()) //GetMilliSeconds = Wall Clock Time
    {
        //the frame we simulated is still in the past
        input = GetUserInput();
        UpdateSimulation(delta, input);
        //we are trying to catch up and eventually pass the wall clock time
        simulationTime += delta;
    }
    //since my current simulation is in the future and
    //my last simulation is in the past,
    //the current look of the world has got to be somewhere in between
    RenderGraphics(InterpolateWorldState(GetMilliSeconds() - simulationTime));
}
That's my question:
I have 40 ms to go through the outer 'while true' loop (which means 25 FPS).
The RenderGraphics method takes 10 ms, so that leaves 30 ms for the inner loop. The UpdateSimulation method takes 5 ms. Everything else can be ignored, since it's all under 0.1 ms.
What is the maximum I can set the variable 'delta' to in order to stay in my time schedule of 40ms (outer loop)?
And why?
This largely depends on how often you want and need to update your simulation state and user input, given the constraints mentioned below. For example, if your game contains internal state based on physical behavior, you need a smaller delta to ensure that movements and collisions, if any, are properly evaluated and reflected in the game state. Also, if your user input requires fine-grained evaluation and state updates, you need smaller delta values. For example, a shooting game with analogue user input (e.g. mouse, joystick) benefits from update frequencies above 30 Hz. If your game does not need such high-frequency evaluation of input and game state, then you can get away with larger delta values, or even with simply updating your game state whenever any input from the player is detected.
In your specific pseudo-code, your simulation updates in fixed time slices of length delta, which requires each simulation update to be processed in less wall-clock time than the wall-clock time being simulated. Otherwise, wall-clock time advances faster than your simulation time can be updated. This ultimately limits your delta, depending on how quickly a simulation update covering delta of simulation time can actually be computed.
That relationship also depends on your use case and may not be linear or constant. For example, physics engines often internally divide the delta time they are given into whatever update rate they can reasonably process, because longer delta times may cause numerical instabilities and harder-to-solve linear systems, raising the processing effort non-linearly. In other use cases, simulation updates may take linear or even constant time. Even so, many (possibly external) events can cause your simulation update to be processed too slowly if it is inherently demanding: loading resources during simulation updates, the operating system setting your execution thread aside, another process run by the user, anti-virus software kicking in, memory pressure, a slow CPU and so on.
I have mostly seen two strategies to evade this problem or remedy its effects. The first is to simply ignore it, which can work if the simulation update effort is low and the cause of the slowdown is assumed to be temporary. This results in more or less noticeable "slow motion" behavior of your simulation, which could, in the worst case, lead to simulation time lag piling up forever. The second strategy I often see is to cap the measured frame time to be simulated at some artificial value, say 1000 ms. This leads to smooth behavior as soon as the cause of the slowdown disappears, but has the drawback that the capped simulation time is lost, which may lead to animation hiccups if not handled or accounted for.
To choose a strategy, you could analyze your use case by measuring the wall-clock time it takes to process simulation updates of delta and x * delta simulation time, and how changing the delta time and the simulation load is reflected in the wall-clock time needed to compute it. That will hint at what the maximum value of delta is for your specific hardware and software environment.
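To make the capping idea concrete, here is a small self-contained Python sketch of the fixed-timestep loop from the question combined with the frame-time cap described above. The toy world state (a point moving at 1 unit/s), the 250 ms cap and the 10 ms fake render are illustrative values, not part of the original question:
import time

DELTA = 0.020           # fixed simulation step: 20 ms of simulated time
MAX_CATCH_UP = 0.250    # cap on how much wall time we catch up per frame (assumed value)

position = 0.0          # toy world state: a point moving at 1 unit/s
prev_position = 0.0

def update_simulation(dt):
    """Advance the toy world state by dt seconds."""
    global position, prev_position
    prev_position = position
    position += 1.0 * dt

start = time.perf_counter()
simulation_time = 0.0

while simulation_time < 2.0:                       # run 2 s for demonstration
    wall = time.perf_counter() - start
    # cap the amount of wall-clock time we try to catch up on in one frame
    target = min(wall, simulation_time + MAX_CATCH_UP)
    while simulation_time < target:
        update_simulation(DELTA)
        simulation_time += DELTA
    # interpolate between the previous and current state for smooth rendering
    wall = time.perf_counter() - start
    alpha = max(0.0, min(1.0, 1.0 + (wall - simulation_time) / DELTA))
    rendered = prev_position + (position - prev_position) * alpha
    print(f"render at x={rendered:.3f}")
    time.sleep(0.010)                              # stand-in for a 10 ms render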

Oxyplot: IsValidPoint on realtime LineSerie

I've been using oxyplot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after a fast processing, I'm plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I decimate it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
With the aid of ReSharper, I've noticed that the application calls the IsValidPoint method a huge number of times (something like 400,000,000 calls across my 6 different plots), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks every point in the series for validity, instead of only the newly added values.
It also spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones don't change, so there's no need to re-validate them. The text shown doesn't change either.
Thank you in advance for any advice you can give me. Have a good day!
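This isn't an OxyPlot-specific fix, but for reference, the decimation the asker describes (reducing each batch to a fixed number of points before handing it to the series) can be sketched language-agnostically. The Python below is purely illustrative, since OxyPlot itself is a .NET library, and the 80-point target is the figure from the question:
def decimate(samples, max_points):
    """Keep at most max_points samples by taking a strided subset.
    A min/max pair per bucket would preserve peaks better, at twice the point count."""
    if len(samples) <= max_points:
        return list(samples)
    stride = len(samples) / max_points
    return [samples[int(i * stride)] for i in range(max_points)]

# Example: 10,000 samples captured in one second reduced to 80 plotted points
reduced = decimate(list(range(10_000)), 80)
print(len(reduced))  # 80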

When to check for collision

In a game I'm making, I have a moving Mars lander, and I want to detect when it hits the top of a refuelling station. I want to know the best way to tell exactly when they collide.
What I'm doing at the moment is having 3 timers controlling horizontal, vertical and gravitational movement. Each timer makes the Mars lander move a bit, so I put the crash-detection code at the top and bottom of each of them. But since each one fires every 50 milliseconds, a) it won't detect exactly when the collision happens, and b) it will call the crash-detection code a lot of times.
This is what one of the timers looks like, and it's pretty much the same for all 3 (I am making the game in VB 2008; this is just some pseudo-code):
gravity timer:
detectCrash()
Move ship
detectCrash()
End timer
What would be a more accurate way to see if they are colliding, in terms of response time from the collision to the code that handles it?
How could I call the detection code fewer times?
I could possibly make another timer that fires every 10 milliseconds or so and check the collision there, but then this will run the code an awful lot of times (100 times a second), won't it?
I am also curious how large games handle something like this, which most likely happens many times a second.
Thanks.
One thing you could do is use a single timer to control all movement, firing every 16 or 17 milliseconds (roughly 60 FPS). Even that may be faster than you need; you should be fine with around 40 milliseconds (25 FPS). In this timer event, calculate the new position of your object before moving it, check whether there would be a collision at the new position, and only move the object if there isn't.
At least that is how I would do it in VB.NET. However, I would probably go with XNA.
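As a rough illustration of that "compute first, then check, then move" idea inside a single timer tick (Python rather than VB.NET, with the lander and pad as toy axis-aligned rectangles and made-up numbers):
def intersects(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def tick(lander, vx, vy, pad, dt):
    """One timer tick: predict the new position, test it, and only then move."""
    moved = (lander[0] + vx * dt, lander[1] + vy * dt, lander[2], lander[3])
    if intersects(moved, pad):
        return lander, True   # collision detected before any overlap appears on screen
    return moved, False

# Example tick: a 10x10 lander falling fast toward a pad whose top is at y=100
print(tick((0.0, 85.0, 10.0, 10.0), 0.0, 150.0, (0.0, 100.0, 40.0, 5.0), 0.04))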
You don't want to have separate timers for moving the object and for detecting collisions. This approach has a couple of obvious problems:
It doesn't scale at all; imagine adding more objects. Would you have two timers for each (one each for the X and Y planes)? Would you add yet another timer to detect collisions for each object?
It's possible for the timers to get out of sync - imagine your code for moving your ship in the X plane takes much longer to run than the code for the Y plane. Suddenly your ship is moving vertically more often than it's moving horizontally.
'Real' games have a single loop, which eliminates most of your problems. In pseudo-code:
Check for user input
Work out new X and Y position of ship
Check for collisions
Draw result on screen
Goto 1
Obviously, there's more to it than that for anything more than a simple game! But this is the approach you want, rather than trying to have separate loops for everything you've got going on.
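To make those five steps concrete, here is a toy Python rendition with printing standing in for drawing and a one-dimensional lander so it stays short; the 40 ms frame, the gravity value and the starting altitude are all made-up numbers:
import time

FRAME = 0.040                 # one pass through the loop every 40 ms (25 FPS)
GRAVITY = 9.8                 # toy value
altitude, velocity = 100.0, 0.0
pad_top = 0.0

while True:
    frame_start = time.perf_counter()
    thrust = 0.0                               # 1. check for user input (stubbed out)
    velocity += (GRAVITY - thrust) * FRAME     # 2. work out the new position of the ship
    new_altitude = altitude - velocity * FRAME
    if new_altitude <= pad_top:                # 3. check for collisions
        print(f"touchdown at {velocity:.1f} m/s")
        break
    altitude = new_altitude
    print(f"altitude {altitude:6.1f} m")       # 4. draw result on screen (stubbed out)
    # 5. goto 1 -- but first sleep out the rest of the 40 ms frame
    time.sleep(max(0.0, FRAME - (time.perf_counter() - frame_start)))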