Repast: check execution time for each method - repast-simphony

My model gradually slows down to an unacceptable speed (i.e. from 200 ticks per second to several seconds per tick). I'd like to understand what causes this problem. What is the simplest way to check which part of the model is increasingly consuming time? I have tried other Java profilers before, but found them hard to use and difficult to understand.

A Java profiler like YourKit is the best approach, since it will show the code "hot spots" in terms of the execution time of each class method. Alternatively, you can insert a few timing calls in the parts of your model that you suspect contribute most to the execution time, for example:
long start = System.nanoTime();
// some model code here
long end = System.nanoTime();
System.out.println("Step A time in seconds: " + (end - start) / 1E9);

Related

How to add a delay in microseconds in a Tcl file?

I am using the when command in a Tcl file, and after the condition is met I want to wait for some microseconds. I have found after, but the delay specified for after is in milliseconds; it does not accept decimal values.
So is there any other way to add a short delay in a Tcl file?
There's no native operation for that. If it is critical, you could busy-loop looking at clock microseconds…
proc microsleep {micros} {
    set expiry [expr {$micros + [clock microseconds]}]
    while {[clock microseconds] < $expiry} {}
}
I don't really recommend doing this as it is not energy efficient; such high precision waiting is rarely required in my experience (unless you're working on an embedded system with realtime requirements, an area where Tcl isn't a perfect fit).
Of course, you can also make a C wrapper round a system call like nanosleep(), and that might or might not be a better choice (and might or might not be more efficient)…
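For what it's worth, a minimal (untested) sketch of that C route might look like the following. The microsleep command name and the Microsleep_Init entry point are placeholders chosen for illustration, and the achievable accuracy still depends on the OS scheduler:

/* Hypothetical Tcl extension exposing nanosleep() as [microsleep].
 * Build as a shared library with -DUSE_TCL_STUBS, link against the
 * Tcl stubs library, and bring it in with [load]. */
#include <tcl.h>
#include <time.h>

static int
MicrosleepCmd(ClientData cd, Tcl_Interp *interp,
              int objc, Tcl_Obj *const objv[])
{
    int micros;
    struct timespec ts;

    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "microseconds");
        return TCL_ERROR;
    }
    if (Tcl_GetIntFromObj(interp, objv[1], &micros) != TCL_OK) {
        return TCL_ERROR;
    }
    ts.tv_sec = micros / 1000000;
    ts.tv_nsec = (long)(micros % 1000000) * 1000;
    nanosleep(&ts, NULL);    /* accuracy is whatever the kernel gives you */
    return TCL_OK;
}

int
Microsleep_Init(Tcl_Interp *interp)
{
    if (Tcl_InitStubs(interp, "8.6", 0) == NULL) {
        return TCL_ERROR;
    }
    Tcl_CreateObjCommand(interp, "microsleep", MicrosleepCmd, NULL, NULL);
    return TCL_OK;
}

After loading the library you would call it from Tcl as microsleep 250, with the usual caveat that sub-millisecond sleeps are at the mercy of the scheduler.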

Confusion with writing a game loop

I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've looked into seem to implement both a draw and an update method.
For my project I implemented a loop that calls these two methods. I noticed that with other frameworks, these methods don't always get called in alternation; some frameworks will have update run far more often than draw. Also, most frameworks of this type run at 60 FPS, so I figure I'll need some sort of sleep in here.
My question is: what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case I'm writing a wrapper around SDL2, so maybe that library requires something to be set up in a certain way?
Here's some "pseudo" code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal-Lang, I'm more looking for a general concept that could be applied to any language.
It depends on what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this); for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, quite famously Overwatch).
It's important to consider as well that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed; any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time
  work_done = false
  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end
  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end
  # Nothing to do, sleep for the smallest period
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end
  # Leave, maybe
  break if window_is_closed
end
You don't want to wait 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so we know whether the intervals we calculated at the start of the loop are still valid; we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and store the last time you used a magical power as a clock time, that power will instantly come off cooldown when you load the save, as that time is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
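If it helps to see the same structure outside pseudocode, here is a rough C sketch of that loop against a monotonic clock. update(), draw(), clear_screen() and window_is_closed() are placeholders for whatever your framework provides, and the POSIX clock_gettime()/nanosleep() calls would need swapping out on other platforms:

/* Sketch of the dual-rate loop above in plain C. The framework calls
 * below are placeholders, not part of any real library. */
#include <time.h>
#include <stdbool.h>

extern void update(void);
extern void clear_screen(void);
extern void draw(void);
extern bool window_is_closed(void);

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void run_loop(void)
{
    const double update_interval = 1.0 / 120.0;   /* logic at 120 Hz (example) */
    const double draw_interval   = 1.0 / 60.0;    /* frames at 60 Hz (example) */
    double time_of_last_update = now_seconds();
    double time_of_last_draw   = now_seconds();

    while (!window_is_closed()) {
        double now = now_seconds();
        double time_until_update = (update_interval + time_of_last_update) - now;
        double time_until_draw   = (draw_interval + time_of_last_draw) - now;
        bool work_done = false;

        if (time_until_update <= 0.0) {           /* update if it's been long enough */
            update();
            time_of_last_update = now;
            work_done = true;
        }
        if (time_until_draw <= 0.0) {             /* draw if it's been long enough */
            clear_screen();
            draw();
            time_of_last_draw = now;
            work_done = true;
        }
        if (!work_done) {                         /* nothing to do: sleep for the smaller wait */
            double smaller = time_until_update < time_until_draw ? time_until_update : time_until_draw;
            struct timespec req = { (time_t)smaller, (long)((smaller - (time_t)smaller) * 1e9) };
            nanosleep(&req, NULL);
        }
    }
}

The logic and draw rates are only examples; the point is that each is driven by its own interval and timestamp, exactly as in the pseudocode above.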

Hyperopt set timeouts and modify space during execution

Can someone help with the following?
How to set a timeout for each individual test? And a timeout for the total experiment?
How to set up a progressive strategy that would eliminate/prune a percentage of the worst-scoring branches of the search space at different stages of the experiment (while still using the current optimization algorithms)? I.e. at 30% of the maximum total experiment it could remove the 50% worst-scoring classifiers and their whole branches of hyperparameters from upcoming tests, then apply the same process again at 60%...
Thanks a lot!
Following my exchange on hyperopt's github:
there is no per-trial timeout, but hyperopt-sklearn implements its own solution by just wrapping the function. Please look for "fn_with_timeout" at https://github.com/hyperopt/hyperopt-sklearn/ .
from issue 210: "the optimizers are stateless, and fmin stores all state of the experiment in the trials object. So if you remove some experiments from the trials object, it's as if they never happened. use fmin's "max_evals" parameter to interrupt search as often as you need to make these sorts of modifications. It should be fine to use repeated calls with e.g. max_evals increasing by 1 every time if you want really fine grained control."
Thanks for looking into this, @doxav. I've written some code that addresses question 1, taking part of fn_with_timeout from hyperopt-sklearn and adapting it for standard Hyperopt cost functions.
You can find it here:
https://gist.github.com/hunse/247d91d14aaa8f32b24533767353e35d

How can I (reasonably) precisely perform an action every N milliseconds?

I have a machine which uses an NTP client to sync up to internet time, so its system clock should be fairly accurate.
I've got an application which I'm developing that logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So for example, if I wanted 20 ms intervals, my outputs ought to be something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans rather than looking for specific timestamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this with reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
I don't know how well it plays with C++/CLI, but you probably want to look at multimedia timers.
Windows isn't really real-time, but this is as close as it gets.
You can get a pretty accurate timestamp out of timeGetTime() when you reduce the timer period. You'll just need some work to convert its return value to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;
class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do { // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
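For illustration, a bare-bones C sketch of that timeSetEvent() approach might look like this (link against winmm.lib; do_output() is a placeholder for whatever needs to run every 20 ms):

/* Periodic 20 ms multimedia-timer callback; a rough sketch only. */
#include <windows.h>
#include <stdio.h>

static void do_output(void)     /* placeholder for the periodic action */
{
    SYSTEMTIME st;
    GetLocalTime(&st);
    printf("%02d:%02d:%02d:%03d\n", st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
}

static void CALLBACK tick(UINT id, UINT msg, DWORD_PTR user, DWORD_PTR r1, DWORD_PTR r2)
{
    do_output();                /* keep this short; it runs on a timer thread */
}

int main(void)
{
    timeBeginPeriod(1);         /* ask for 1 ms timer resolution */
    MMRESULT timer = timeSetEvent(20, 1, tick, 0, TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(2000);                /* let it run for a while */
    timeKillEvent(timer);
    timeEndPeriod(1);
    return 0;
}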
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment that you want your process to be running. But you can improve matters using timeBeginPeriod to make it more likely that your process can be switched to in a timely manner, and perhaps being cunning with how you wait between iterations - eg. sleeping for most but not all of the time and then using a busy-loop for the remainder.
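A rough C sketch of that "sleep for most of the time, busy-loop for the remainder" idea, using QueryPerformanceCounter for the final stretch (the 2 ms margin is just an illustrative guess, not a tuned value):

#include <windows.h>

/* Wait until 'target' (in QueryPerformanceCounter ticks): coarse Sleep()
 * for most of the interval, then busy-wait for the remainder. */
void wait_until(LONGLONG target, LONGLONG ticks_per_second)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    LONGLONG remaining_ms = (target - now.QuadPart) * 1000 / ticks_per_second;

    if (remaining_ms > 2)
        Sleep((DWORD)(remaining_ms - 2));   /* leave ~2 ms of margin for the spin */

    do {
        QueryPerformanceCounter(&now);      /* burn CPU only for the last stretch */
    } while (now.QuadPart < target);
}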
Try doing this in two threads. In one thread, query a high-precision timer in a loop. When you detect a timestamp that aligns with (or is reasonably close to) a 20 ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads ensures that your log output thread doesn't interfere with the timer (this essentially emulates a hardware timer interrupt, which is the way I would do it on an embedded platform).
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1 ms. I don't know why the millisecond field in your example output has four digits; the max value is 999 (since 1000 ms = 1 second).
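As a sketch of that combination in plain C (error handling omitted; the 20 ms period matches the question's example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);   /* auto-reset timer */
    LARGE_INTEGER due;
    due.QuadPart = -200000LL;   /* first fire in 20 ms (100 ns units, negative = relative) */

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    SetWaitableTimer(timer, &due, 20 /* period in ms */, NULL, NULL, FALSE);

    for (int i = 0; i < 50; ++i) {
        WaitForSingleObject(timer, INFINITE);
        /* do the periodic work here */
        printf("tick %d\n", i);
    }

    CancelWaitableTimer(timer);
    CloseHandle(timer);
    return 0;
}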
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there isn't a timer that syncs with a specific time, so you will have to compute your next time and schedule the timer for that specific time. If your timer only has delta support, then the delta is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.
As already pointed out, Windows is not a real-time OS, so you must assume that even if you schedule a timer to go off at ":0010", your code might not execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to hit 1 ms timings in Windows reliably without some sort of real-time add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.

Rudimentary ways to measure execution time of a method

What object/method would I call to get the current time in milliseconds (or greater precision) to help measure how long a method took to execute?
NSDate's timeIntervalSinceDate: returns an NSTimeInterval, which is measured in seconds. I am looking for something finer-grained, something similar to Java's System.currentTimeMillis.
Is there an equivalent version in objective-c/CocoaTouch?
For very fine-grained timings on OS X, I use mach_absolute_time( ), which is defined in <mach/mach_time.h>. You can use it as follows:
#include <mach/mach_time.h>
#include <stdint.h>

static double ticksToNanoseconds = 0.0;

uint64_t startTime = mach_absolute_time();
// Do some stuff you want to time here
uint64_t endTime = mach_absolute_time();

// Elapsed time in mach time units
uint64_t elapsedTime = endTime - startTime;

// The first time we get here, ask the system
// how to convert mach time units to nanoseconds
if (0.0 == ticksToNanoseconds) {
    mach_timebase_info_data_t timebase;
    // To be completely pedantic, check the return code of this next call.
    mach_timebase_info(&timebase);
    ticksToNanoseconds = (double)timebase.numer / timebase.denom;
}

double elapsedTimeInNanoseconds = elapsedTime * ticksToNanoseconds;
Actually, +[NSDate timeIntervalSinceReferenceDate] returns an NSTimeInterval, which is a typedef for a double. The docs say
NSTimeInterval is always specified in seconds; it yields sub-millisecond precision over a range of 10,000 years.
So it's safe to use for millisecond-precision timing. I do so all the time.
Do not use NSDate for this. You're losing a lot of precision to calling methods and instantiating objects, maybe even releasing something internal. You just don't have enough control.
Use either time.h or, as Stephen Canon suggested, mach/mach_time.h. They are both much more accurate.
The best way to do this is to fire up Instruments or Shark, attach them to your process (works even if it's already running) and let them measure the time a method takes.
Once you're familiar with it, this takes even less time than any put-in-mach-time-functions-and-recompile-the-whole-application solution, and you get a lot of extra information. I wouldn't settle for anything less.
timeIntervalSinceReferenceDate is perfectly fine.
However, unless it's a long-running method, this won't bear much fruit. Execution times can vary wildly when you're talking about a few millisecond executions. If your thread/process gets preempted mid-way through, you'll have non-deterministic spikes. Essentially, your sample size is too small. Either use a profiler or run 100,000 iterations to get total time and divide by 100,000 to get average run-time.
If you're trying to tune your code's performance, you would do better to use Instruments or Shark to get an overall picture of where your app is spending its time.
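If you do go the iterate-and-average route mentioned above, a plain-C sketch using the mach_absolute_time() machinery shown earlier might look like this; work_under_test() and the iteration count are placeholders:

/* Average the cost of a call over many iterations using mach time. */
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

extern void work_under_test(void);   /* the method/function being measured */

void measure_average(void)
{
    const int iterations = 100000;
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    uint64_t start = mach_absolute_time();
    for (int i = 0; i < iterations; ++i) {
        work_under_test();
    }
    uint64_t elapsed = mach_absolute_time() - start;

    double nanos = (double)elapsed * timebase.numer / timebase.denom;
    printf("average: %.1f ns per call\n", nanos / iterations);
}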
I will repost my answer from another post here. Note that my admittedly simple solution to this complex problem uses NSDate and NSTimeInterval as its foundation:
I know this is an old one, but I found myself wandering past it again, so I thought I'd submit my own option here.
Best bet is to check out my blog post on this:
Timing things in Objective-C: A stopwatch
Basically, I wrote a class that acts as a stopwatch in a very basic way but is encapsulated so that you only need to do the following:
[MMStopwatchARC start:@"My Timer"];
// your work here ...
[MMStopwatchARC stop:@"My Timer"];
And you end up with:
MyApp[4090:15203] -> Stopwatch: [My Timer] runtime: [0.029]
in the log...
Again, check out my post for a little more or download it here:
MMStopwatch.zip
@bladnman I love your stopwatch thing.. I use it all the time.. Here's a little block I wrote that eliminates the need for the closing call, and makes it even EASIER (if that even seemed possible) to use, lol.
+ (void)stopwatch:(NSString *)name timing:(void (^)(void))block {
    [MMStopwatch start:name];
    block();
    [MMStopwatch stop:name];
}
then you can just call it wherever..
[MMStopwatch stopwatch:@"slowAssFunction" timing:^{
    NSLog(@"%@", @"someLongAssFunction");
}];
↪someLongAssFunction
-> Stopwatch: [slowAssFunction] runtime:[0.054435]
You should post that sucker to GitHub so people can find it easily and contribute, etc. It's great. Thanks.