This script shows that the timestamp Redis returns seems to jump forward a lot and then go backwards from time to time. This happens regularly throughout the run, on every run. What's going on? Is this documented anywhere? I stumbled on this because it messes up my sliding-window rate limiter.
res_0, res_1 = 0, 0
for _ in range(200):
    script = """
    local time = redis.call("TIME")
    local current_time = time[1] .. "." .. time[2]
    return current_time
    """
    res_0 = res_1
    res_1 = float(await redis.eval(script, numkeys=0))
    print(res_1 - res_0)
    time.sleep(0.01)
1667745169.747809
0.011765003204345703
0.01197195053100586
0.011564016342163086
0.011634111404418945
0.012428998947143555
0.011847972869873047
0.011600971221923828
0.011788129806518555
0.012033939361572266
0.012130022048950195
0.01160883903503418
0.011954069137573242
0.012022972106933594
0.011958122253417969
0.011713981628417969
0.011844873428344727
0.012138128280639648
0.011618852615356445
0.011570215225219727
0.011890888214111328
0.011478900909423828
0.7928261756896973
-0.5926899909973145
0.11812996864318848
0.11584997177124023
0.12353992462158203
0.1199800968170166
0.11719989776611328
0.12331008911132812
-0.8117339611053467
0.011723995208740234
0.01131582260131836
The most likely reason for this behavior is floating-point arithmetic on the calling side: parsing the result into a float inevitably rounds it (to what extent depends on the platform you are coding against, which you haven't said), and the precision of the original value is lost. So I would suggest reviewing your logic so that you process the two components of the result returned by TIME independently, as a pair of integers/longs, instead.
In addition to that, apart from the obvious possibility of an issue with the Redis server's clock, there is also the chance that you are contacting a different Redis host on each iteration; this can happen if you are using a multi-node Redis topology (replication or cluster).
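For illustration, here is a minimal sketch of that idea using redis-py's asyncio client (the client setup, script, and iteration count are just for the demo): the script returns the raw TIME reply and the caller combines the two components into a single integer number of microseconds. As a side benefit, this avoids gluing the fields together with ".", which drops leading zeros from the microsecond field and can by itself make the parsed value jump around.

import asyncio
import redis.asyncio as redis

SCRIPT = """
return redis.call("TIME")  -- {seconds, microseconds} as two separate values
"""

async def server_time_us(client) -> int:
    # Keep the two components as integers; no float parsing, no rounding.
    seconds, micros = await client.eval(SCRIPT, 0)
    return int(seconds) * 1_000_000 + int(micros)

async def main():
    client = redis.Redis()
    prev = await server_time_us(client)
    for _ in range(20):
        await asyncio.sleep(0.01)
        now = await server_time_us(client)
        print(now - prev)  # whole microseconds between samples
        prev = now

asyncio.run(main())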
I am using the when command in a Tcl file, and after the condition is met I want to wait for some microseconds. I have found after, but the delay we specify for after is in milliseconds; it does not take decimal values.
So is there any other way to add a short delay in a Tcl file?
There's no native operation for that. If it is critical, you could busy-loop looking at clock microseconds…
proc microsleep {micros} {
    set expiry [expr {$micros + [clock microseconds]}]
    while {[clock microseconds] < $expiry} {}
}
I don't really recommend doing this as it is not energy efficient; such high precision waiting is rarely required in my experience (unless you're working on an embedded system with realtime requirements, an area where Tcl isn't a perfect fit).
Of course, you can also make a C wrapper round a system call like nanosleep(), and that might or might not be a better choice (and might or might not be more efficient)…
I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've looked into seem to implement both draw and update methods.
For my project I implemented a loop that calls these two methods. I noticed that in other frameworks these methods don't always alternate; some frameworks will run update far more often than draw. Also, most of these frameworks run at 60 FPS, so I figure I'll need some sort of sleep in here.
My question is: what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case I'm writing a wrapper around SDL2, so maybe that library requires something to be set up in a certain way?
Here's some "pseudo" code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal-Lang, I'm more looking for a general concept that could be applied to any language.
It depends what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this); for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, quite famously Overwatch).
It's also important to consider that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed; any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time
  work_done = false

  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end

  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end

  # Nothing to do, sleep for the smallest period
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end

  # Leave, maybe
  break if window_is_closed
end
You don't want to wait for 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so that we know whether the intervals we calculated at the start of the loop are still valid; we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and store the last time you used a magical power as a clock time, it will instantly come off cooldown when you load the save, because that moment is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
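If it helps to see the idea in a concrete language, here is a rough, runnable Python sketch of the same loop; the intervals and the update/draw/window_is_closed stubs are placeholders rather than part of any particular framework.

import time

UPDATE_INTERVAL = 1 / 120   # game logic at 120 Hz (arbitrary for the example)
DRAW_INTERVAL = 1 / 60      # rendering at 60 Hz

def update(dt): pass                   # placeholder logic step; dt is the real elapsed time
def draw(): pass                       # placeholder render step
def window_is_closed(): return False   # placeholder; would come from the windowing/event layer

last_update = last_draw = time.perf_counter()
while not window_is_closed():
    now = time.perf_counter()
    time_until_update = (last_update + UPDATE_INTERVAL) - now
    time_until_draw = (last_draw + DRAW_INTERVAL) - now
    work_done = False

    if time_until_update <= 0:
        update(now - last_update)      # time-critical logic uses the real clock delta
        last_update = now
        work_done = True

    if time_until_draw <= 0:
        draw()
        last_draw = now
        work_done = True

    if not work_done:
        # Nothing to do: sleep until whichever deadline comes first, then recompute.
        time.sleep(max(0.0, min(time_until_update, time_until_draw)))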
How can I parameterise one signal based on another signal?
e.g. Suppose I wanted to modify the fps based on the x-position of the mouse. The types are:
Mouse.x : Signal Int
fps : number -> Signal Time
How could I make Elm understand something along the lines of this pseudocode:
fps (Mouse.x) : Signal Time
Obviously, lift doesn't work in this case. I think the result would be Signal (Signal Time) (but I'm still quite new to Elm).
Thanks!
Preamble
fps Mouse.x
Results in a type error, because fps requires an Int, not a Signal Int.
lift fps Mouse.x : Signal (Signal Time)
You are correct there. As CheatX's answer mentions, you cannot use these "nested signals" in Elm.
Answer to your question
It seems like you're asking for something that doesn't exist yet in the Standard Libraries. If I understand your question correctly you would like a time (or fps) signal of which the timing can be changed dynamically. Something like:
dynamicFps : Signal Int -> Signal Time
Using the built-in functions like lift does not give you the ability to construct such a function yourself from a function of type Int -> Signal Time.
I think you have three options here:
Ask to have this function added to the Time library on the mailing-list. (The feature request instructions are a little bloated for a request of such a function so you can skip stuff that's not applicable)
Work around the problem, either from within Elm or in JavaScript, using Ports to connect to Elm.
Find a way to not need a dynamically changing Time signal.
I advise option 1. Option 3 is sad; you should be able to do what you asked in Elm. Option 2 is perhaps not a good idea if you're new to Elm. Option 1 is not a lot of work, and the folks on the mailing-list don't bite ;)
To elaborate on option 2, should you want to go for that:
If you specify an outgoing port for Signal Int and an incoming port for Signal Time you can write your own dynamic time function in JavaScript. See http://elm-lang.org/learn/Ports.elm
If you want to do this from within Elm, it'll take an uglier hack:
dynamicFps frames =
  let start = (0,0)
      time = every millisecond -- this strains your program enormously
      input = (,) <~ frames ~ time
      step (frameTime,now) (oldDelta,old) =
        let delta = now - old
        in  if (oldDelta,old) == (0,0)
              then (frameTime,now) -- this is to skip the (0,0) start
              else if delta * frameTime >= second
                     then (delta,now)
                     else (0,old)
  in  dropIf ((==) 0) 0 <| fst <~ foldp step start input
Basically, you remember an absolute timestamp, ask for the new time as fast as you can, and check whether the time between the remembered timestamp and now is big enough to fit the timeframe you want. If so, you send out that time delta (fps gives time deltas) and remember now as the new timestamp. Because foldp sends out everything it has to remember, you get both the new delta and the new time, so using fst <~ you keep only the delta. But the input time is (likely) much faster than the timeframe you want, so you also get a lot of (0,old) from foldp. That's why there is a dropIf ((==) 0).
Nested signals are explicitly forbidden by Elm's type system [part 3.2 of this paper].
As far as I understand FRP, nested signals are only useful when some kind of flattening is provided (a monadic 'join' function, for example), and that operation is hard to implement without keeping the entire signal history.
I have a machine which uses an NTP client to sync to internet time, so its system clock should be fairly accurate.
I've got an application which I'm developing that logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So, for example, if I wanted 20 ms intervals, my outputs ought to look something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans as opposed to looking for specific timestamps. The code doing this runs in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this with reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
I don't know how well it plays with C++/CLR, but you probably want to look at multimedia timers. Windows isn't really real-time, but this is about as close as it gets.
You can get a pretty accurate timestamp out of timeGetTime() when you reduce the timer period. You'll just need some work to get its return value converted to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;

class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do { // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }
    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment that you want your process to be running. But you can improve matters using timeBeginPeriod to make it more likely that your process can be switched to in a timely manner, and perhaps being cunning with how you wait between iterations - eg. sleeping for most but not all of the time and then using a busy-loop for the remainder.
Try doing this in two threads. In one thread, query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20 ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads makes sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which is the way I would do it on an embedded platform).
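To make the shape of that concrete, here is a rough Python sketch of the two-thread structure; the 20 ms interval, the queue, and the timestamp formatting are only for the demo, and a C++/CLR version would use something like timeGetTime() or QueryPerformanceCounter plus an event or queue in the same way.

import queue
import threading
import time

INTERVAL = 0.020                # 20 ms boundaries
ticks = queue.Queue()

def timer_thread():
    # Poll the clock and signal the logger with the timestamp of each boundary.
    while True:
        next_boundary = (time.time() // INTERVAL + 1) * INTERVAL
        remaining = next_boundary - time.time()
        if remaining > 0.002:
            time.sleep(remaining - 0.002)    # sleep most of the way...
        while time.time() < next_boundary:   # ...then busy-wait the last stretch
            pass
        ticks.put(next_boundary)

def logger_thread():
    while True:
        stamp = ticks.get()                  # block until the timer signals us
        ms = round((stamp % 1) * 1000)
        print(time.strftime("%H:%M:%S", time.localtime(stamp)) + f":{ms:03d}")

threading.Thread(target=timer_thread, daemon=True).start()
threading.Thread(target=logger_thread, daemon=True).start()
time.sleep(0.2)                              # let a handful of boundaries fire for the demo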
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1ms. I don't know why the millisecond field in your example output has four digits, the max value is 999 (since 1000 ms = 1 second).
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there is no timer that syncs with a specific time, so you will have to compute your next target time and schedule the timer for that specific time. If your timer only supports deltas, that is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer request reaches the kernel.
As already pointed out, Windows is not a real-time OS, so you must assume that even if you schedule a timer to go off at ":0010", your code might not actually execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to hit 1 ms timings in Windows reliably without some sort of real-time add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.
I noted that the parameter of taskDelay is of type int, which means the value could be negative. I'm just wondering how the function reacts when passed a negative number.
Most functions would validate the input, and just return early/return 0/set the parameter in question to a default value.
I presume there's no critical need to do this in production, and you probably have some code lying around that you could test with... why not give it a go?
The documentation doesn't address it, and the only error codes it does define don't cover this case. The most correct answer, therefore, is that the results are undefined.
See the VxWorks / Tornado II FAQ for this gem, however:
taskDelay(-1) shows another bug in the vxWorks timer/tick code. It has the (side) effect of setting vxTicks to zero. This corrupts the localtime (and probably other things). In fact taskDelay(x) will have the same effect if vxTicks + x >= 0x100000000. If the system clock rate is 100Hz this happens after about 500 days (because vxTicks wraps). At faster clock rates it will happen sooner. Anyone trying for several years uptime?
Oh there is an undocumented upper limit on the clock rate. At rates above 4294 select() will fail to convert its 'usec' time into the correct number of ticks. (From: David Laight, dsl#tadpole.co.uk)
Assuming this bug is old, I would hope that it would either return an error or do the same thing as taskDelay(0), which puts your task at the end of the ready queue.
The task's delay tick count will virtually be 10, 9, ..., 1, 0 for taskDelay(10).
The task's delay tick count will virtually be -10, -11, ..., -2147483648, 2147483647, ..., 1, 0 for taskDelay(-10).
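If the tick counter really does count down through the signed wraparound like that, the effective delay is the two's-complement reinterpretation of the argument, which is easy to sanity-check (hypothetical numbers, assuming 32-bit ticks and a 100 Hz system clock):

# Two's-complement view of the negative argument, assuming 32-bit ticks
delay_arg = -10
ticks = delay_arg & 0xFFFFFFFF     # 4294967286 ticks
seconds = ticks / 100              # at an assumed 100 Hz system clock rate
print(ticks, seconds / 86400)      # roughly 497 days of "delay"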