Repeating a short sound very fast with CreateJS

Goal
I am trying to create a fast ticking sound in a Cordova app using CreateJS.
The speed of the ticking changes based on user settings. At the moment the timing is erratic.
Setup
I have an mp3 audio file of a single tick sound that is 50ms long.
A target speed of repetition could be as fast as 10 times per second.
Question
How can I get the sound to play evenly and consistently at that speed?
More Technical Detail
createjs.Ticker.timingMode = createjs.Ticker.RAF_SYNCHED;
createjs.Ticker.framerate = 30;
Cheers for any help

This should be pretty straightforward. I set up a quick fiddle to play a sound a specific number of times per second. It seems pretty reliable, even when playing at 60 fps.
https://jsfiddle.net/lannymcnie/ghjejvq9/
The approach is to check on every Ticker tick whether enough time has passed since the last tick sound. The interval is derived from 1000 / ticksPerSecond.
var ticksPerSecond = 10;
var lastTick = 0;

createjs.Ticker.on("tick", function () {
    // Every tick, play the sound once the interval has elapsed
    var d = new Date().getTime();
    if (d > lastTick + 1000 / ticksPerSecond) {
        createjs.Sound.play("tick");
        lastTick = d;
    }
});

Related

Embedded System - Polling

I have about 6 sensors (GPS, IMU, etc.) that I need to constantly collect data from. For my purposes, I need a reading from each (within a small time frame) to have a complete data packet. Right now I am using interrupts, but this results in more data from certain sensors than others, and, as mentioned, I need to have the data matched up.
Would it be better to move to a polling-based system in which I could poll each sensor in a set order? This way I could have data from each sensor every 'cycle'.
I am, however, worried about the speed of polling because this system needs to operate close to real time.
Polling combined with a "master timer interrupt" could be your friend here. Let's say that your "slowest" sensor can provide data on 20ms intervals, and that the others can be read faster. That's 50 updates per second. If that's close enough to real-time (probably is close for an IMU), perhaps you proceed like this:
Set up a 20ms timer.
When the timer goes off, set a flag inside an interrupt service routine:
volatile uint8_t timerFlag = 0;

ISR(TIMER_ISR_whatever)
{
    timerFlag = 1;  // nothing but a semaphore for later...
}
Then, in your main loop act when timerFlag says it's time:
while (1)
{
    if (timerFlag == 1)
    {
        <read first device>
        <read second device>
        <you get the idea ;) >
        timerFlag = 0;
    }
}
In this way you can read each device and keep their readings synched up. This is a typical way to solve this problem in the embedded space. Now, if you need data faster than 20ms, then you shorten the timer, etc. The big question, as it always is in situations like this, is "how fast can you poll" vs. "how fast do you need to poll." Only experimentation and knowing the characteristics and timing of your various devices can tell you that. But what I propose is a general solution when all the timings "fit."
EDIT, A DIFFERENT APPROACH
A more interrupt-based example:
volatile uint8_t device1Read = 0;
volatile uint8_t device2Read = 0;
// etc...

ISR(device1)
{
    <read device>
    device1Read = 1;
}

ISR(device2)
{
    <read device>
    device2Read = 1;
}
// etc...

// main loop
while (1)
{
    if (device1Read == 1 && device2Read == 1 /* && etc... */)
    {
        // <do something with your "packet" of data>
        device1Read = 0;
        device2Read = 0;
        // etc...
    }
}
In this example, all your devices can be interrupt-driven but the main-loop processing is still governed, paced, by the cadence of the slowest interrupt. The latest complete reading from each device, regardless of speed or latency, can be used. Is this pattern closer to what you had in mind?
Polling is a pretty good and easy to implement idea in case your sensors can provide data practically instantly (in comparison to your desired output frequency). It does get into a nightmare when you have data sources that need a significant (or even variable) time to provide a reading or require an asynchronous "initiate/collect" cycle. You'd have to sort your polling cycles to accommodate the "slowest" data source.
If you know the average "data conversion rate" of each of your sources (i.e., how long each one needs to produce a reading), one solution is to set up one timer per data source that triggers at (poll time − that source's conversion time) and kicks off the measurement from those timer ISRs. Then have one last timer that triggers at (poll time + some safety margin) and collects all the conversion results.
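As a rough illustration of that staggered-start idea, here is a minimal C sketch. The poll period, the per-source conversion times and the start_/collect_ functions are all made-up placeholders; it only shows how a single 1 ms tick interrupt (an assumption) can start each conversion just early enough that everything is ready at a common collection point.
#include <stdint.h>

#define POLL_PERIOD_MS    20u   /* one complete packet every 20 ms            */
#define GPS_CONV_MS       12u   /* assumed conversion time, slowest source    */
#define IMU_CONV_MS        2u   /* assumed conversion time, a faster source   */
#define SAFETY_MARGIN_MS   1u   /* collect slightly after the nominal instant */

extern void start_gps_conversion(void);   /* hypothetical device hooks */
extern void start_imu_conversion(void);
extern void collect_results(void);        /* assembles the data packet */

/* Call from a 1 ms timer ISR. Each source is started just early enough that
   every conversion has finished by t == 0, the common poll instant. (The very
   first collection, before anything has started, would be skipped in real code.) */
void poll_scheduler_tick(void)
{
    static uint16_t t = 0;

    if (t == POLL_PERIOD_MS - GPS_CONV_MS)  start_gps_conversion();
    if (t == POLL_PERIOD_MS - IMU_CONV_MS)  start_imu_conversion();
    if (t == SAFETY_MARGIN_MS)              collect_results();

    t = (uint16_t)((t + 1u) % POLL_PERIOD_MS);
}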
On the other hand, your apparent problem of "having too many measurements" from the "fast" data sources wouldn't bother me too much, as long as you don't have anything better to do with the CPU/sensor load that gets wasted on them.
A last and easier approach, in case you have some cycles to waste: simply sort the data sources from "slowest" to "fastest", initiate a measurement on each in that order, then wait for and read the results in the same order.
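A minimal sketch of that last approach might look like the following; the packet layout and every device function here are hypothetical placeholders:
#include <stdint.h>

typedef struct {
    int32_t gps[3];   /* illustrative fields only */
    int32_t imu[6];
} sensor_packet_t;

extern void start_gps_measurement(void);   /* slowest source to convert */
extern void start_imu_measurement(void);   /* fastest source to convert */
extern int  gps_ready(void);               /* non-blocking "done?" checks */
extern int  imu_ready(void);
extern void read_gps(sensor_packet_t *p);
extern void read_imu(sensor_packet_t *p);

/* Kick off every conversion, slowest first, then collect in the same order.
   By the time the slowest source is done, the faster ones are done as well,
   so the later waits are essentially free. */
void collect_packet(sensor_packet_t *p)
{
    start_gps_measurement();
    start_imu_measurement();

    while (!gps_ready()) { /* spin, or sleep if the platform allows */ }
    read_gps(p);

    while (!imu_ready()) { /* normally already complete */ }
    read_imu(p);
}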

Confusion with writing a game loop

I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've ever looked into seem to implement both draw and update methods.
For my project I implemented a loop that calls these two methods. I noticed that in other frameworks, these methods don't always alternate; some frameworks will run update far more often than draw. Also, most frameworks of this type run at 60 FPS. I figure I'll need some sort of sleep in here.
My question is, what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case, I'm writing a wrapper around SDL2, so maybe that library requires something to be set up in a certain way?
Here's some "pseudo" code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal-Lang, I'm more looking for a general concept that could be applied to any language.
It depends what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this), for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, quite famously Overwatch).
It's important to consider as well that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed. Any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time
  work_done = false

  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end

  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end

  # Nothing to do, sleep for the smallest period
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end

  # Leave, maybe
  break if window_is_closed
end
You don't want to wait for 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so we know whether the intervals we calculated at the start of the loop are still valid; we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and store the last time you used a magical power as a clock time, it will instantly come off cooldown when you load the save, as that time is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
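As a rough illustration of that clock abstraction (shown here as a plain C sketch with made-up names, not anything framework-specific): the game keeps its own clock that only advances by the frame deltas you feed it, and cooldown timestamps are stored against that clock, so save/load gaps or OS suspension can't make them expire early.
#include <stdint.h>

/* A game-owned clock: it only moves when the loop feeds it a frame delta,
   so it is unaffected by save/load gaps or the process being suspended. */
typedef struct {
    uint64_t game_time_ms;   /* safe to serialize into a save file */
} game_clock_t;

static void game_clock_advance(game_clock_t *c, uint64_t wall_delta_ms)
{
    c->game_time_ms += wall_delta_ms;
}

/* Cooldowns compare against game time, never the wall clock directly. */
static int power_ready(const game_clock_t *c,
                       uint64_t last_used_ms, uint64_t cooldown_ms)
{
    return (c->game_time_ms - last_used_ms) >= cooldown_ms;
}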

Display AVPlayer playback speed in a macOS app

I'm starting from an Apple demo app called AVMovieEditor. I'm trying to add a visual display of the playback speed. Without any modification from me, AVMovieEditor responds to the J, K & L keys like a video editing application: J plays in reverse, J again plays faster in reverse, K is pause, L plays, L again plays faster, etc.
I've successfully created an observer; however, every time I check AVPlayer.rate it is either -1, 0 or 1, despite the video playing faster than realtime. Indeed, when L is pressed the rate goes from 0 to 1, but when L is pressed a second time and the playback speeds up, the rate changes back from 1 to 0. The result is similar with J and reverse playback, except the rate changes to -1 and then back to 0.
- (void)addPeriodicTimeObserver {
    // About 30 times per second
    CMTime interval = CMTimeMake(33, 1000);
    // Queue on which to invoke the callback
    dispatch_queue_t mainQueue = dispatch_get_main_queue();
    // Add time observer
    self.timeObserverToken =
        [self.movieViewController.playerView.player addPeriodicTimeObserverForInterval:interval
                                                    queue:mainQueue
                                                    usingBlock:^(CMTime time) {
        NSLog(@"%f %f",
              // Only shows -1, 0 or 1, despite playback faster than 100%
              self.movieViewController.playerView.player.rate,
              // Always shows 29.97
              [[self.movieViewController.playerView.player.currentItem.tracks[0] assetTrack] nominalFrameRate]);
    }];
}
After much searching I am stumped as to how I can find a useful playback rate.
I'd like to know the actual frame rate of playback, and then I will make an indicator in my GUI to inform the end user of the current playback rate.
Edit: I've found another interesting bit of data, but it doesn't solve my problem...
self.movieViewController.playerView.player.currentItem.tracks[0].currentVideoFrameRate
When the above is observed upon starting playback it ramps up from 0 to 30-ish, then when L is pressed a second time it ramps up to 60-ish. This would solve my problem but for two details. First, when L is pressed a third time video playback speeds up further, but currentVideoFrameRate still hangs around 60-ish. Second, currentVideoFrameRate only shows positive values. If J is pressed twice and AVPlayer.rate goes back to 0 from -1, there is no way of knowing that the frames are traversing in reverse order!
One thing I have found no information on, which I think might shed light on a solution, is what entity is capturing the J, K & L keyboard events and what response is being applied.
Edit 2: Apple's QuickTime Player has the functionality I am looking for. It responds to J, K & L like AVMovieEditor, but it displays 2x, 5x, 10x, 30x & 60x respectively for its playback speed, and displays them on the left for reverse playback. Is this something reserved for Apple only, or can I have it too?!
Edit 3: Based on an answer (thanks!) I've started looking at another property, AVPlayerItem.timebase.rate. However, this is giving similarly incorrect results.
I can confirm that visually the player's rate increases as expected with each press of the "L" key. I would expect that the "Requested Rate" (AVPlayer.rate) I was using previously would also increase, since I see the speed-up visually in playback, but it does not show in the reported "Requested Rate" (it goes to 0).
The "Playback Actually Occurring Rate" behaves similarly, except that when I press a stop key it still displays a rate of 1, presumably because the player was playing at the time of the request to stop. But AVPlayerItem.timebase.rate also reports a "Playback Actually Occurring Rate" of 0 when the playback rate is greater than 1 or less than -1.
This is the code I am currently using to see the rate. It's triggered on an observer that watches for changes in the requested rate. Is there a problem with how I am looking at these?
NSLog(#"RequestedPlaybackRate:%f",myAVPlayer.rate);
NSLog(#"OccuringPlaybackRate:%f",CMTimebaseGetRate(myAVPlayerItem.timebase));
I will clarify also that the "Requested Rate" is actually what I want, not the "Playback Actually Occurring Rate".
Edit 4:
Created a GitHub project for easy reproduction of this problem and submitted a bug report with Apple.
https://github.com/markjwill/AVPlayerRateBug
AVPlayer.rate is the app's requested playback rate.
The rate at which playback is actually occurring is found with AVPlayerItem.timebase.rate.
This is explained in detail in the WWDC 2016 session "Advances in AVFoundation Playback":
https://developer.apple.com/videos/play/wwdc2016/503/?time=378

ActionScript 3. How to prevent lag bug issues in Flash games?

How do you prevent lag-related bugs in Flash games? For example, say the game has a 1-minute countdown timer and the player has to catch as many items as possible.
Here are the lag-related issues:
If items are moving (they don't have a static position), then the more lag the player has, the slower the items move;
The timer counts down more slowly when the player is lagging (CPU usage at 90-100%).
So, for example, if a player without lag can get 100 points, a player with a slow or bad computer can get 4-6x more, like 400-600.
I think that's because it all runs on the client side, but how do I move it to the server side? Should I insert (and update) the countdown time in a database? But how would I update it every millisecond?
And what about a solution for the item positions? If a player has heavy lag, the items move very slowly, so they are easy to click on. Any ideas?
Moving the functionality to the server side doesn't solve the problem.
Now if there are many players connected to the server, the server will lag and give those players more time to react.
To make your logic independent of lag, do not base it on screen updates, because that assumes a constant time between screen updates (or frames).
Instead, base your logic on the actual time that passed between frames.
Use getTimer to measure how much time passed between the current and the last frame.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html
Of course, your logic should include calculations for what happens in between frames.
In order to mostly fix speed issues on the client, you need to base all your speed-related code on actual time, not frames.
Here is a fairly typical example of code used to move an object based on frames:
// speed = pixels per frame
var xSpeed:Number = 5;
var ySpeed:Number = 5;

addEventListener(Event.ENTER_FRAME, update);

function update(e:Event):void {
    player.x += xSpeed;
    player.y += ySpeed;
}
While this code is simple and good enough for a single client, it is very dependent on the frame rate, and as you know the frame rate is very "elastic": the actual frame rate is heavily influenced by the client's CPU speed.
Instead, here is an example where the movement is based on actual elapsed time:
// speed = pixels per second
var xSpeed:Number = 5 * stage.frameRate;
var ySpeed:Number = 5 * stage.frameRate;

var lastTime:int = getTimer();
addEventListener(Event.ENTER_FRAME, update);

function update(e:Event):void {
    var currentTime:int = getTimer();
    var elapsedSeconds:Number = (currentTime - lastTime) / 1000;
    player.x += xSpeed * elapsedSeconds;
    player.y += ySpeed * elapsedSeconds;
    lastTime = currentTime;
}
The crucial part here is that the current time is tracked using getTimer(), and each update moves the player based on the actual elapsed time, not a fixed amount. I set xSpeed and ySpeed to 5 * stage.frameRate to illustrate how it can be equivalent to the other example, but you don't have to do it that way. The end result is that the second example has a consistent speed of movement regardless of the actual frame rate.

How can I (reasonably) precisely perform an action every N milliseconds?

I have a machine which uses an NTP client to sync up to internet time, so its system clock should be fairly accurate.
I'm developing an application which logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So for example, if I wanted 20 ms intervals, my outputs ought to be something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans, as opposed to looking for specific time stamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this to a reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
Don't know how well it plays with C++/CLR, but you probably want to look at multimedia timers.
Windows isn't really real-time, but this is as close as it gets.
You can get a pretty accurate time stamp out of timeGetTime() when you reduce the time period. You'll just need some work to get its return value converted to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;

class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do {    // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
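For reference, a bare-bones native (C) sketch of that timeSetEvent() route could look like the following; the 20 ms period and the callback body are just placeholders:
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

/* Multimedia timer callback: runs roughly every uDelay milliseconds on a
   worker thread; winmm compensates if a previous callback ran late. */
static void CALLBACK TickProc(UINT uTimerID, UINT uMsg,
                              DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
    /* Do the periodic work here (keep it short; this is a callback thread). */
    printf("%lu\n", (unsigned long)timeGetTime());
}

int main(void)
{
    timeBeginPeriod(1);                       /* 1 ms timer resolution */
    MMRESULT id = timeSetEvent(20, 1, TickProc, 0,
                               TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(1000);                              /* let it tick for a second */
    timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}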
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want, because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment that you want your process to be running. But you can improve matters by using timeBeginPeriod to make it more likely that your process can be switched to in a timely manner, and perhaps by being cunning with how you wait between iterations, e.g. sleeping for most but not all of the time and then using a busy-loop for the remainder.
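One possible shape for that "sleep most of the interval, then busy-wait the rest" idea, sketched in C against the winmm functions mentioned above; the 20 ms period and the 2 ms spin margin are arbitrary:
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

#define PERIOD_MS 20    /* target interval (arbitrary for this sketch)      */
#define SPIN_MS    2    /* busy-wait this much at the end of each interval  */

int main(void)
{
    timeBeginPeriod(1);                          /* improve Sleep() granularity */
    DWORD next = timeGetTime() + PERIOD_MS;

    for (int i = 0; i < 50; ++i) {
        /* Sleep through most of the remaining time, then spin to the deadline. */
        LONG remaining = (LONG)(next - timeGetTime());
        if (remaining > SPIN_MS)
            Sleep((DWORD)(remaining - SPIN_MS));
        while ((LONG)(next - timeGetTime()) > 0)
            ;                                    /* busy-wait the last bit */

        printf("%lu\n", (unsigned long)timeGetTime());  /* do the periodic work */
        next += PERIOD_MS;                       /* fixed schedule, no drift */
    }

    timeEndPeriod(1);
    return 0;
}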
Try doing this in two threads. In one thread, query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20 ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads will make sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which would be the way I would do it on an embedded platform).
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1ms. I don't know why the millisecond field in your example output has four digits, the max value is 999 (since 1000 ms = 1 second).
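A short C sketch of that waitable-timer route, with an illustrative 20 ms period and thread priority:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Auto-reset waitable timer: first due time 20 ms from now (a negative
       value means relative time, in 100 ns units), then every 20 ms after. */
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -20LL * 10000LL;             /* 20 ms, relative */

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    SetWaitableTimer(timer, &due, 20, NULL, NULL, FALSE);

    for (int i = 0; i < 50; ++i) {
        WaitForSingleObject(timer, INFINITE);   /* wakes every 20 ms */
        printf("%lu\n", GetTickCount());        /* do the periodic work */
    }

    CancelWaitableTimer(timer);
    CloseHandle(timer);
    return 0;
}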
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there isn't a timer that syncs to a specific wall-clock time, so you will have to compute your next target time and schedule the timer for it. If your timer only has delta support, then that is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.
As already pointed out, Windows is not a real-time OS. So you must assume that even if you schedule a timer to go off at ":0010", your code might not actually execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to hit 1 ms timings reliably in Windows without some sort of RT add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.