SUMO sim time and real time difference - sumo

I used traci.simulation.getTime to get the current sim time of SUMO.
However, this time runs faster than real time.
For example, while sim time grows from 0 to 100, real time only grows from 0 to 20.
How can I make SUMO simulation time match real time?
I tried --step-length = 1, but this didn't work.

The --step-length property is a value in seconds describing the length of one simulation step. If you put a higher number here, vehicles react less often, but your simulation will probably run faster.
For the real-time issue you might have a look at the sumo-user mailing list. I think this mail gives a pretty good answer to your issue:
the current limit to the real time factor is the speed of your computer. If you want to slow the GUI down you can change the delay value (which is measured in milliseconds), so a value of 100 would add 100 ms to every simulation step (if your simulation is small and runs with the default step length of 1 s, this means a factor of 10). If you want to speed it up, run without the GUI or buy a faster computer ;-).
To check how close your simulation is to wall-clock time, you can check the generated output from SUMO. The value you're looking for is called the real time factor.
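If you are driving the simulation from a TraCI script instead of the GUI, another option is to pace the loop yourself and sleep away whatever wall-clock time is left in each step. A minimal Python sketch, assuming a hypothetical my_scenario.sumocfg and the default 1 s step length:

import time
import traci

STEP_LENGTH = 1.0  # seconds of simulated time per step (matches --step-length 1)

# "my_scenario.sumocfg" is a placeholder config; point it at your own scenario.
traci.start(["sumo", "-c", "my_scenario.sumocfg", "--step-length", str(STEP_LENGTH)])

while traci.simulation.getMinExpectedNumber() > 0:
    wall_start = time.time()
    traci.simulationStep()                 # advances sim time by STEP_LENGTH
    elapsed = time.time() - wall_start
    if elapsed < STEP_LENGTH:
        # sleep the remainder so one simulated second takes about one real second
        time.sleep(STEP_LENGTH - elapsed)

traci.close()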

Related

Recommended way of measuring execution time in Tensorflow Federated

I would like to know whether there is a recommended way of measuring execution time in TensorFlow Federated. To be more specific: if one would like to extract the execution time for each client in a certain round (e.g., for each client involved in a FedAvg round, saving a timestamp just before local training starts and another just before sending back the updates), what is the best (or simply correct) strategy to do this? Furthermore, since the clients' code runs in parallel, are such timestamps misleading (especially considering that different clients may be using differently sized models for local training)?
To be very practical: is using tf.timestamp() at the beginning and at the end of @tf.function client_update(model, dataset, server_message, client_optimizer) (this is probably a simplified signature) and then subtracting the two timestamps appropriate?
I have the feeling that this is not the right way to do it, given that clients run in parallel on the same machine.
Thanks to anyone who can help me with this.
There are multiple potential places to measure execution time, so the first step might be to define very specifically what the intended measurement is.
Measuring the training time of each client as proposed is a great way to get a sense of the variability among clients. This could help identify whether rounds frequently have stragglers. Using tf.timestamp() at the beginning and end of the client_update function seems reasonable. The question correctly notes that this happens in parallel; summing all of these times would be akin to CPU time.
Measuring the time it takes to complete all client training in a round would generally be the maximum of the values above. This might not be true when simulating FL in TFF, as TFF may decide to run some number of clients sequentially due to system resource constraints. In practice all of these clients would run in parallel.
Measuring the time it takes to complete a full round (the maximum time it takes to run a client, plus the time it takes for the server to update) could be done by moving the tf.timestamp calls to the outer training loop. This would mean wrapping the call to trainer.next() in the snippet on https://www.tensorflow.org/federated. This would be most similar to elapsed real time (wall-clock time).
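As a rough illustration of that last option, a hedged sketch of wall-clock timing around the outer loop; the trainer, state, and client_datasets names in the usage comment are placeholders for whatever iterative process and client data you already have:

import time

def timed(fn, *args, **kwargs):
    """Run fn and return its result together with the elapsed wall-clock seconds."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Assumed usage with an existing TFF iterative process and per-round client data:
# (state, metrics), round_seconds = timed(trainer.next, state, client_datasets)
# print(f"round took {round_seconds:.2f} s of wall-clock time")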

Speech recognition syllable counter

I'm wanting help in finding or creating an app or program that counts syllables with voice recognition. The application is for stuttering therapy: slowing the speech down, and joining/slurring words at slow speed (50 syllables per minute), then slowly speeding up while practicing a modified way of speaking. The speaker spends two days at 50 SPM (syllables per minute), then the next few days at 80 SPM, then 100, and so on, until the stutterer is talking at 180-200 syllables per minute (normal speed) but with a modified speech pattern (smooth speech) which significantly reduces the stutter. In the past I have used a handheld device, manually counted the syllables, and told the speaker to slow down or speed up depending on their syllable count.
There is a praat script to detect syllable nuclei that has been published and validated. You may be able to use this script for your work, or it may be a good starting point for you to write your own.
You may also be interested in the Android app SpeakRite, which displays speaking rate in real time.

Average time per step is constant despite a variable delay

I've made a cellular automaton (Langton's ant, FYI) in VBA. At each step there is a Sleep(delay) call, where delay is a variable. I've also added DoEvents at the end of a display function to ensure that each step is shown on screen.
With a Timer I can monitor how long one step requires on average. The result is plotted on the graph below (Y-axis: time per step in ms; X-axis: delay in ms).
Could you explain why it looks like that, and especially why it remains steady? In my opinion I should get (more or less) a straight line.
I got these results without doing anything else on my computer during the whole process.
Thank you in advance for your help.
That would be because the Sleep API is based on the system clock. If the resolution of your clock is coarser than the time you are sleeping, the sleep will be rounded up to the next tick of the system clock. You may be able to call timeGetDevCaps to see the minimum timer resolution of your system.
Think about it this way: you have a normal watch with only the usual hour/minute/second hands (no hand for milliseconds, etc.). You want to time half a second, but your watch only moves in one-second intervals, so its resolution is one tick per second. You would not know that half a second has actually passed until the full second goes by, so the measurement is rounded up to the next tick.
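The effect is easy to reproduce in any language; here is a small sketch (in Python rather than VBA) that compares requested sleep times with what the system actually delivers:

import time

# Requested sleeps shorter than (or not a multiple of) the timer resolution
# come back rounded up to the next tick, which is what flattens the curve.
for requested_ms in (1, 2, 5, 10, 20, 50):
    start = time.perf_counter()
    time.sleep(requested_ms / 1000.0)
    actual_ms = (time.perf_counter() - start) * 1000.0
    print(f"requested {requested_ms:3d} ms -> slept about {actual_ms:6.2f} ms")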

Understanding simple simulation and rendering loop

This is an example (pseudo-code) of how you could simulate and render a video game.
//simulate 20ms into the future
const long delta = 20;
long simulationTime = 0;

while (true)
{
    while (simulationTime < GetMilliSeconds()) //GetMilliSeconds = wall clock time
    {
        //the frame we simulated is still in the past
        input = GetUserInput();
        UpdateSimulation(delta, input);
        //we are trying to catch up and eventually pass the wall clock time
        simulationTime += delta;
    }
    //since my current simulation is in the future and
    //my last simulation is in the past,
    //the current look of the world has to be somewhere in between
    RenderGraphics(InterpolateWorldState(GetMilliSeconds() - simulationTime));
}
Here's my question:
I have 40 ms to go through the outer 'while(true)' loop (which means 25 FPS).
The RenderGraphics method takes 10 ms, so that leaves 30 ms for the inner loop. The UpdateSimulation method takes 5 ms. Everything else can be ignored since it takes under 0.1 ms.
What is the maximum I can set the variable 'delta' to in order to stay in my time schedule of 40ms (outer loop)?
And why?
This largely depends on how often you want and need to update your simulation state and user input, given the constraints mentioned below. For example, if your game contains internal state based on physical behavior, you need a smaller delta to ensure that movements and collisions, if any, are properly evaluated and reflected in the game state. Likewise, if your user input requires fine-grained evaluation and state updates, you also need smaller delta values; a shooting game with analogue input (e.g. mouse, joystick), for instance, would benefit from update frequencies greater than 30 Hz. If your game does not need such high-frequency evaluation of input and game state, you can get away with larger delta values, or even with simply updating your game state whenever input from the player is detected.
In your specific pseudo-code, the simulation updates in fixed time slices of length delta, which requires each simulation update to be processed in less wall-clock time than the wall-clock time it simulates. Otherwise, wall-clock time would advance faster than your simulation time can be updated. This ultimately limits delta by how quickly an update covering delta of simulation time can actually be computed.

This relationship also depends on your use case and may not be linear or constant. For example, physics engines often internally subdivide the delta they are given into whatever update rate they can reasonably process, because longer delta times can cause numerical instabilities and harder-to-solve linear systems, raising the processing effort non-linearly. In other use cases, simulation updates may take linear or even constant time. Even so, many (possibly external) events can cause a simulation update to be processed too slowly if it is inherently demanding: loading resources during simulation updates, the operating system setting your execution thread aside, another process run by the user, anti-virus software kicking in, memory pressure, a slow CPU, and so on.

I have mostly seen two strategies to avoid this problem or remedy its effects. The first is simply to ignore it, which can work if the simulation update effort is low and the cause of the slowdown is assumed to be temporary. This results in more or less noticeable "slow motion" behavior of your simulation, which in the worst case lets the simulation time lag pile up forever. The second strategy is to cap the measured frame time to be simulated at some artificial value, say 1000 ms. This gives smooth behavior as soon as the cause of the slowdown disappears, but has the drawback that the capped simulation time is lost, which may lead to animation hiccups if not handled or accounted for (see the sketch below).

To choose a strategy, analyzing your use case could consist of measuring the wall-clock time it takes to process simulation updates of delta and x * delta, and how changing the delta time and simulation load actually affects the wall-clock time needed to compute it. This will hint at what the maximum value of delta is for your specific hardware and software environment.
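A minimal sketch (in Python, with placeholder update/render functions) of the second strategy: a fixed-timestep loop that caps how much wall-clock time it tries to catch up per frame. The DELTA and FRAME_TIME_CAP values are illustrative, not prescriptive:

import time

DELTA = 0.020           # fixed simulation step of 20 ms
FRAME_TIME_CAP = 1.0    # cap on the wall-clock time we try to catch up per frame

def get_user_input():
    return None         # placeholder for real input polling

def update_simulation(delta, user_input):
    pass                # placeholder for the real game-state update

def render_graphics(alpha):
    pass                # placeholder; alpha in [0, 1) interpolates world states

accumulator = 0.0
previous = time.monotonic()
while True:
    now = time.monotonic()
    frame_time = now - previous
    previous = now
    # Cap the time to be simulated so a long stall does not make the
    # inner loop spin for seconds trying to catch up.
    accumulator += min(frame_time, FRAME_TIME_CAP)
    while accumulator >= DELTA:
        update_simulation(DELTA, get_user_input())
        accumulator -= DELTA
    render_graphics(accumulator / DELTA)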

Record Cpu usage per minute for particular process

I want to record CPU usage, CPU time, and VM size in Notepad every minute for one particular process (not for all of them). Is there any way to do this? I work as a performance/stress tester and it is my duty to capture CPU performance at particular times, and the script takes so long that it is sometimes inconvenient for me to take all the readings.
Please suggest.
Thank you.
Use Performance Monitor, if Windows is the system you are working on. It has all kinds of log options and will do what you need.
Performance Monitor gives you the option of recording performance data for a particular process. Look under 'Process'...
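If you would rather script it than use Performance Monitor, here is a minimal sketch using the third-party psutil package; the process name, log file name, and one-minute interval are assumptions you would adjust:

import time
import psutil

PROCESS_NAME = "notepad.exe"     # assumed target process; change to the one under test
LOG_FILE = "process_stats.log"   # assumed output file

def find_process(name):
    # return the first running process whose executable name matches
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    return None

proc = find_process(PROCESS_NAME)
with open(LOG_FILE, "a") as log:
    while proc is not None and proc.is_running():
        cpu_percent = proc.cpu_percent(interval=None)      # CPU % since the last call
        user, system = proc.cpu_times()[:2]                # accumulated CPU time in seconds
        vm_mb = proc.memory_info().vms // (1024 * 1024)    # virtual memory size in MB
        log.write(f"{time.ctime()}  cpu={cpu_percent:.1f}%  "
                  f"cpu_time={user + system:.1f}s  vm={vm_mb}MB\n")
        log.flush()
        time.sleep(60)                                     # one sample per minute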