How to step one frame forward and one frame backward in video playback? - html5-video

I need to search for special features/patterns that may be visible in only one or two of many frames. The frame rate can be as low as 25 frames per second, and a video may contain over 7500 frames. I usually start by scanning the video at high speed to find a segment where the feature might appear, then rewind. I repeat this procedure while gradually reducing the playback speed, until I have narrowed the feature (if it is present) down to a fairly small time window. I would then like to step forward and backward by single frames using key events (e.g. the right and left arrow keys) to find the feature of interest. I have managed to control the forward playback speed with HTML5 and JavaScript, but I still do not know how to step a video forward and backward one frame at a time from the keyboard. How can this be accomplished? Note: my web browser is Firefox 26 running on Windows 7.

You can seek to any time in the video by setting the currentTime property. Something like this:
var video = document.getElementById('video'),
    frameTime = 1 / 25; // assume 25 fps

// use 'keydown' rather than 'keypress': arrow keys do not reliably fire
// 'keypress' in all browsers
window.addEventListener('keydown', function (evt) {
    if (video.paused) { // or you can force it to pause here
        if (evt.keyCode === 37) { // left arrow
            // one frame back; don't go below zero
            video.currentTime = Math.max(0, video.currentTime - frameTime);
        } else if (evt.keyCode === 39) { // right arrow
            // one frame forward
            // don't go past the end, otherwise you may get an error
            video.currentTime = Math.min(video.duration, video.currentTime + frameTime);
        }
    }
});
Just a couple of things you need to be aware of, though they shouldn't cause you too much trouble:
There is no way to detect the frame rate from the video element, so you have to either hard-code it, look it up in a table, or guess.
Seeking may take a few milliseconds or more and does not happen synchronously: the browser needs some time to load the video from the network (if it is not already loaded) and to decode the frame. If you need to know when seeking is done, listen for the 'seeked' event, as in the sketch below.
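For example, a minimal sketch (reusing the video variable from above) that reports once the browser has actually finished seeking and decoded the new frame:

video.addEventListener('seeked', function () {
    // the new frame is decoded and displayed at this point
    console.log('Now showing frame at ' + video.currentTime + ' seconds');
});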

You can check whether the video has advanced to the next frame by comparing currentTime against the time you asked for:

targetTime += (1 / 25); // 25 fps
video.currentTime = targetTime; // seek to the next frame

if (video.currentTime >= video.duration) { // it's the end of the video
    alert('done!');
}
if (video.currentTime === targetTime) { // frame has been updated
    // ...
}

Note that the browser may clamp or round currentTime, so an exact floating-point comparison like this can occasionally fail; comparing within a small tolerance (or listening for the 'seeked' event, as above) is more robust.

Related

Tile Collision In GML

I am making a game based on the game AZ on the website Y8, and I am having problems with tile collisions.
The player basically moves by gaining speed while the up key is pressed and rotating left or right:
direction = image_angle;
if (keyForward)
{
    speed = 2;
}
else
{
    speed = 0;
}
// rotate
if (keyRotateLeft)
{
    image_angle = image_angle + 5;
}
if (keyRotateRight)
{
    image_angle = image_angle - 5;
}
Then I set speed = 0 when the player collides with the tile, but the player gets stuck and can't move anymore. Is there a better way to do this?
A simple approach would be as follows (see the sketch below):
Attempt to rotate.
Check if you are now stuck in a wall.
If you are, undo the rotation.
A more advanced approach would be to attempt pushing the player out of solids while rotating.
Alternatively, you may be able to get away with giving the player a circular mask and not rotating the actual mask (using a user-defined variable instead of image_angle).
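A minimal GML sketch of the rotate-then-undo idea (obj_wall is an assumed name for your solid tile object):

// remember the current angle, then attempt the rotation
var previous_angle = image_angle;
if (keyRotateLeft)  image_angle += 5;
if (keyRotateRight) image_angle -= 5;

// if the rotated mask now overlaps a wall, undo the rotation
if (place_meeting(x, y, obj_wall))
{
    image_angle = previous_angle;
}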

NAudio: How to accurately get the current play location when changing play position using AudioFileReader and WaveOutEvent

I'm creating an application that needs to allow the user to select the play position of an audio file while the file is playing. However, once the position is changed, I'm having trouble identifying the current position.
Here's an example program that shows how I'm going about it.
using NAudio.Utils;
using NAudio.Wave;
using System;

namespace NAudioTest
{
    class Program
    {
        static void Main()
        {
            var audioFileReader = new AudioFileReader("test.mp3");
            var waveOutEvent = new WaveOutEvent();
            waveOutEvent.Init(audioFileReader);
            waveOutEvent.Play();
            while (true)
            {
                var key = Console.ReadKey(true);
                if (key.Key == ConsoleKey.Enter)
                {
                    var playLocationSeconds =
                        waveOutEvent.GetPositionTimeSpan().TotalSeconds;
                    Console.WriteLine(
                        "Play location is " + playLocationSeconds + " seconds");
                }
                else if (key.Key == ConsoleKey.RightArrow)
                {
                    audioFileReader.CurrentTime =
                        audioFileReader.CurrentTime.Add(TimeSpan.FromSeconds(30));
                }
                else
                {
                    break;
                }
            }
        }
    }
}
Steps to reproduce the problem:
1. Start the program: the audio file starts playing.
2. Press the Enter key: the current play time is written to the console.
3. Press the right arrow key: the played audio jumps ahead (presumably) to the expected location.
4. Press the Enter key again: a play time is written to the console, but it looks to be the amount of time since the audio first started playing, not the time of the current play position.
I have tried reading AudioFileReader.CurrentTime instead of calling GetPositionTimeSpan on the WaveOutEvent. The problem with this approach is that the AudioFileReader.CurrentTime value advances in jumps, presumably because the underlying stream is read in buffers when used with WaveOutEvent, so CurrentTime reflects the position in the underlying stream rather than the audible play position.
How do I support arbitrary play positioning yet continue to get an accurate play position current time?
The "CurrentTime" property of your audio file reader is good enough to tell the current position of playback, especially if your latency is not very high. I found the difference between it and waveOutEvent.GetPositionTimeSpan() to be 100-200 ms. at most.
You are indeed using the setter of the CurrentTime property to reposition within the stream. It would be consistent to use the getter to then query the current position as well. If you are concerned with precision, you can use lower latency.
The extension method "GetPositionTimeSpan()" does seem to return the total length of playback so far and not the position within the stream. Admittedly I do not know why this is so.
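For example, the program above could be adjusted like this (a sketch; the 100 ms DesiredLatency value is an assumption, not a requirement):

// lower latency narrows the gap between the stream position and what is audible
var waveOutEvent = new WaveOutEvent { DesiredLatency = 100 };
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();

// ...and in the Enter-key branch, report the same property used for seeking
var playLocationSeconds = audioFileReader.CurrentTime.TotalSeconds;
Console.WriteLine("Play location is " + playLocationSeconds + " seconds");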

Web audio API polyphony - using 2 different gain nodes not working?

I can't seem to create two oscillators with independent gain envelopes.
The code below creates two buttons which each play a sine tone at a different pitch. When I click on the first button, I hear the tone grow in volume as it should. But, when I click the second button, the tone reacts as if it is connected to the gain of the first tone. For example, if I click the second button (turning on the second tone) while the first tone is at volume 1, the second tone will enter at volume 1, even though it is supposed to envelope from 0 to 1 to 0 over the course of 10 seconds.
Can I only have one gain node per audio context? Or is there some other reason that the gains of these oscillators are being connected? In addition, after I play the tones once, I cannot play them again, which makes me especially think that I am doing something wrong. : )
Thanks. A link is below and the code is below that. This is my first post here, so let me know if you need anything else. This code must be run in a version of Chrome or Safari that supports the Web Audio API.
http://whitechord.org/just_mod/poly_test.html
WAAPI tests
<button onclick="play()">play one</button>
<button onclick="play2()">play two</button>
<script>
var context;
window.addEventListener('load', initAudio, false);
function initAudio() {
try {
context = new webkitAudioContext();
} catch(e) {
onError(e);
}
}
function play() {
var oscillator = context.createOscillator();
var gainNode = context.createGainNode();
gainNode.gain.value = 0.0;
oscillator.connect(gainNode);
gainNode.connect(context.destination);
oscillator.frequency.value = 700;
gainNode.gain.linearRampToValueAtTime(0.0, 0); // envelope
gainNode.gain.linearRampToValueAtTime(0.1, 5); // envelope
gainNode.gain.linearRampToValueAtTime(0.0, 10); // envelope
oscillator.noteOn(0);
}
function play2() {
var oscillator2 = context.createOscillator();
var gainNode2 = context.createGainNode();
gainNode2.gain.value = 0.0;
oscillator2.connect(gainNode2);
gainNode2.connect(context.destination);
oscillator2.frequency.value = 400;
gainNode2.gain.linearRampToValueAtTime(0.0, 0); // envelope
gainNode2.gain.linearRampToValueAtTime(0.1, 5); // envelope
gainNode2.gain.linearRampToValueAtTime(0.0, 10); // envelope
oscillator2.noteOn(0);
}
/* error */
function onError(e) {
alert(e);
}
</script>
</body>
</html>
Can I only have one gain node per audio context? Or is there some other reason that the gains of these oscillators are being connected? In addition, after I play the tones once, I cannot play them again, which makes me especially think that I am doing something wrong. : )
You can have as many gain nodes as you want (this is how you could achieve mixing bus-like setups, for example), so that's not the problem. Your problem is the following:
Remember that the second parameter to linearRampToValueAtTime() is time in the same time coordinate system as your context.currentTime.
And your context.currentTime is always moving forward in real time, so all your ramps, curves, etc. should be calculated relative to it.
If you want something to happen 4 seconds from now, you'd pass context.currentTime + 4 to the Web Audio API function.
So, change all the calls to linearRampToValueAtTime() in your code so that they look like this:
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime); // envelope
gainNode2.gain.linearRampToValueAtTime(0.1, context.currentTime + 5); // envelope
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime + 10); // envelope
And that should take care of your issues.
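For instance, both buttons could share a single helper that always schedules relative to the current time. This is a sketch using the same prefixed API as the question (newer browsers use createGain() and start() instead):

function playTone(frequency) {
    var oscillator = context.createOscillator();
    var gainNode = context.createGainNode();
    oscillator.connect(gainNode);
    gainNode.connect(context.destination);
    oscillator.frequency.value = frequency;

    var now = context.currentTime;
    gainNode.gain.linearRampToValueAtTime(0.0, now);       // start silent
    gainNode.gain.linearRampToValueAtTime(0.1, now + 5);   // swell over 5 seconds
    gainNode.gain.linearRampToValueAtTime(0.0, now + 10);  // fade out by 10 seconds
    oscillator.noteOn(now);
}

With play() replaced by playTone(700) and play2() by playTone(400), the ramps are always scheduled in the future, so the envelopes also work on repeated clicks.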
BTW you have a stray double quote in your BODY opening markup tag.

UIScrollView lazy loading of images to reduce memory usage and avoid crash

My app uses a scroll view that loads multiple images with NSOperation (max around 100 or so). I tried to test it on my iPod 2nd Gen and it crashes due to low memory on the device, but works fine on an iPod 4th Gen. On the 2nd Gen, it crashes after loading about 15-20 images. How should I handle this problem?
You could load your images lazily. That means keeping just a couple of images at a time in your scroll view, so that you can animate to the next and the previous one; when you move to the right, e.g., you also load one more image; at the same time, you unload images that are no longer directly accessible (e.g. those that have remained to the left).
You should make the number of preloaded images sufficiently high that the user can scroll without waiting at any time; this also depends on how big the images are and where they come from (i.e., how long it takes to load them). A good starting point would be, IMO, 5 images loaded at any time.
Here you will find a nice step by step tutorial.
EDIT:
Since the link above seems to be broken, here is the final code from that post:
- (void)scrollViewDidScroll:(UIScrollView *)myScrollView {
    /**
     * calculate the current page that is shown
     * you can also use myScrollView.frame.size.height if your image is the exact size of your scroll view
     */
    int currentPage = (myScrollView.contentOffset.y / currentImageSize.height);

    // display the image and maybe +/-1 for smoother scrolling
    // but be sure to check if the image already exists; you can do this very easily using tags
    if ([myScrollView viewWithTag:(currentPage + 1)]) {
        return;
    }
    else {
        // view is missing, create it and set its tag to currentPage+1
    }

    /**
     * using your paging numbers as tags, you can also clean the UIScrollView
     * of views that are no longer needed, to get your memory back:
     * remove all image views except -1 and +1 of the currently drawn page
     */
    for (int i = 0; i < currentPages; i++) {
        if ((i < (currentPage - 1) || i > (currentPage + 1)) && [myScrollView viewWithTag:(i + 1)]) {
            [[myScrollView viewWithTag:(i + 1)] removeFromSuperview];
        }
    }
}
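The creation branch marked "view is missing" above might look like this (a sketch; imageForPage: is a hypothetical helper that loads or returns the image for a given page):

UIImageView *imageView = [[UIImageView alloc] initWithImage:[self imageForPage:currentPage]];
imageView.frame = CGRectMake(0, currentPage * currentImageSize.height,
                             currentImageSize.width, currentImageSize.height);
imageView.tag = currentPage + 1;
[myScrollView addSubview:imageView];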

Determine actual frame rate of a stream using QTMovie

I am using QTMovie with QTMovieOpenForPlaybackAttribute:YES and a QTMovieView to display it. I need to calculate the frame rate it is achieving.
One way I can think of doing this is to have a callback which is called every time a frame is about to be displayed or is ready to be displayed - is anyone familiar with such a callback?
Another way would be to have a timer which uses -currentFrameImage and compares each frame with the last one it tested. However, firstly, I don't know how you would go about comparing two NSImages; and secondly, this would be problematic if two sequential frames were identical: it would effectively assume a frame was dropped when it was not.
The last way I can think of would be to again use a timer, this time calling -currentTime. I have tried this; however, for some reason the timeScale is set to 1000000000. I read that the time scale is supposed to be 100*fps, so why is currentTime returning that the FPS is 10000000? This seems completely incorrect. There are no flags set in the QTTime returned.
I have searched everywhere for information on this - any searches to do with frame rate only lead me to how to set a frame rate on capture, which is not what I am looking for.
Try this:
- (double)frameRate
{
    double result = 0;
    for (QTTrack *track in [_movie tracks])
    {
        QTMedia *trackMedia = [track media];
        if ([trackMedia hasCharacteristic:QTMediaCharacteristicHasVideoFrameRate])
        {
            // nominal frame rate = sample (frame) count divided by duration in seconds
            QTTime mediaDuration = [(NSValue *)[trackMedia attributeForKey:QTMediaDurationAttribute] QTTimeValue];
            long long mediaDurationScaleValue = mediaDuration.timeScale;
            long long mediaDurationTimeValue = mediaDuration.timeValue;
            long mediaSampleCount = [(NSNumber *)[trackMedia attributeForKey:QTMediaSampleCountAttribute] longValue];
            result = (double)mediaSampleCount * ((double)mediaDurationScaleValue / (double)mediaDurationTimeValue);
            break;
        }
    }
    return result;
}
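A hypothetical usage sketch (assuming _movie is the same initialized QTMovie the method above iterates over):

double fps = [self frameRate];
if (fps > 0)
{
    // with the nominal rate known, a single frame lasts 1/fps seconds
    NSLog(@"Estimated frame rate: %.2f fps (frame duration %.4f s)", fps, 1.0 / fps);
}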