Movie time from QTVisualContext given CVTimeStamp in CAOpenGLLayer rendering method? - core-animation

I'm using the standard CoreVideo Display Link + QTVisualContext to render a QuickTime movie into an NSOpenGLView subclass. I would now like to synchronize a timeline view with movie playback. The timeline view is implemented as a layer hosting view, hosting a CAOpenGLLayer subclass that renders the timeline. I chose this architecture because the CAOpenGLLayer gets a CVTimeStamp for rendering. I thought that I could use this time stamp to get the current movie time from the QTVisualContext.
The only way I've found to get the movie time from a CVTimeStamp is to copy an image out of the QTVisualContext into a CVImageBuffer (using QTVisualContextCopyImageForTime) and then read the movie time from the CVImageBuffer's kCVBufferMovieTimeKey attachment. Obviously this seems like overkill, since I don't need the image. Furthermore, the documentation for QTVisualContextCopyImageForTime indicates that asking for a time earlier than a previous request is not allowed. Since I can't guarantee the order of events between the Core Animation thread and the Core Video display link thread, I've run into a dead end.
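For reference, here is a minimal sketch of that workaround, hedged: visualContext stands in for the QTVisualContextRef feeding the view, and timeStamp is the CVTimeStamp handed to the CAOpenGLLayer.

CVImageBufferRef imageBuffer = NULL;
OSStatus err = QTVisualContextCopyImageForTime(visualContext, kCFAllocatorDefault,
                                               timeStamp, &imageBuffer);
if (err == noErr && imageBuffer != NULL) {
    // The movie time rides along as an attachment on the buffer.
    CFDictionaryRef movieTime =
        (CFDictionaryRef)CVBufferGetAttachment(imageBuffer, kCVBufferMovieTimeKey, NULL);
    if (movieTime != NULL) {
        CFNumberRef timeValue = (CFNumberRef)CFDictionaryGetValue(movieTime, kCVBufferTimeValueKey);
        CFNumberRef timeScale = (CFNumberRef)CFDictionaryGetValue(movieTime, kCVBufferTimeScaleKey);
        // Movie time in seconds = timeValue / timeScale; drive the timeline from this.
    }
    CVBufferRelease(imageBuffer);
}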
It seems that CVDisplayLinkTranslateTime should be able to translate the CVTimeStamp I get in the CAOpenGLLayer rendering method into the display link's video time and time base, but I'm not sure how to relate that (display link) time to the QuickTime movie's time. I don't necessarily know where the movie's time 0 falls on the display link's timeline.
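For completeness, calling CVDisplayLinkTranslateTime looks roughly like this (a hedged sketch; displayLink and timeStamp are assumed to be the shared display link and the layer's timestamp). Note that the result is in the display link's video time base, so the unknown offset to the movie's time 0 remains:

CVTimeStamp outTime;
memset(&outTime, 0, sizeof(outTime));
outTime.version = 0;
// Ask for the video-time representation of the incoming timestamp.
outTime.flags = kCVTimeStampVideoTimeValid;
if (CVDisplayLinkTranslateTime(displayLink, timeStamp, &outTime) == kCVReturnSuccess) {
    // outTime.videoTime / outTime.videoTimeScale is display-link time,
    // not movie time; mapping it onto the movie's timeline is the open question.
}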
So, is there a way to get the movie time for a CVTimeStamp directly from the QTVisualContext (or from anywhere else in the QTMovie -> QTVisualContext -> Display Link -> ... pathway)?

Related

Getting Contact Information with Tiles in Godot

Is there some way to find out that the player has touched a specific tile in the TileMap resource (tilemap.res)?
I need to determine which tile the player is standing on (for example, land or water).
It depends on what kind of behavior you're expecting. There are two major ways to do this:
In each individual script that you want to be affected by the tiles, get the tile it's on every time it moves, then run the logic directly from there.
As above, get the tile the unit is on each time it moves, but instead of running the logic directly, cache the current tile so you can tell when the unit has moved onto a new one, and emit a signal whenever that happens. Have the TileMap itself connect to the signal and, when it receives it, act on the Node2D until it leaves. This is what I'd do.
The exact call to find the tile is TileMap.get_cellv(TileMap.world_to_map(Node2D.position)) -- convert the node's world position to map coordinates, then look up the cell -- so you just need the Node2D and the TileMap, and you can get both in the same function.
Technically, you could also procedurally add Area2Ds to the tilemap based on the tiles at various positions, using TileMap.get_used_cells_by_id to make sure the ID is the one with the special behavior, then connect the TileMap to those areas' body/area_entered and body/area_exited signals and work from that. But spawning a whole lot of Area2Ds isn't necessary when you can check the tile a Node2D is on directly.

iOS wrong video orientation, BUG?

When I record video, it always records in landscape mode, regardless of the actual device orientation. How can I force UIImagePicker to record in portrait orientation?
Again: UIImagePicker is being used -- not the AVFoundation recording classes.
1) The various properties (movie source type, dimensions, duration, etc.) of MPMoviePlayerController are only available after the movie has been visually played to the user. Before that, they all come back 0. I've tried various things like forcing the system to wait a few seconds (to see if it was just a timing issue), but so far nothing has worked other than actually playing the movie. Even at that point, I believe those properties are effectively read-only; it's not as if I can adjust them directly.
2) The various CGImageSourceRef calls and routines work only on actual images, not movies, on iOS. On macOS there is more support for movies, going through the CV (Core Video) routines as opposed to the CI (Core Image) or CG (Core Graphics) ones. At least, all the examples I've found so far work only on macOS, and nothing I've found shows working code on iOS, which matches my result of getting nil when I attempt it.

How to hear sound from MPMoviePlayer at a specific time using UISlider?

I'm working on an iOS movie editor project. For this editor, I use MPMoviePlayer to show the video file selected by the user.
I use custom controls, and I have a UISlider that lets the user move the player's currentTime position. When the user touches the slider, the movie is paused and its currentTime changes along with the UISlider's value.
Everything works perfectly, but now I need to let the user hear the sound at this currentTime position.
For those who know iMovie: when you move your mouse over a movie event, you see the image and hear the sound at that position, and that's what I'd like in my editor.
I've tried calling the player's play method with an NSTimer to stop it after 0.2 seconds, but the result is rather messy.
Has anyone managed to do something like this?
Thanks!
Best regards.
Seeking takes time; that's why you've ended up using a timer. The real problem here is that MPMoviePlayerController, while convenient because it gives you controls, is a blunt instrument: it's just a massively simplified convenience layer built on top of AVFoundation. But you don't need the built-in controls, so I would suggest throwing away your current implementation and getting down to the real stuff, using AVFoundation directly (AVPlayer etc.). That gives you a coherent way to seek and to be notified when the seek has completed (seekToTime:completionHandler:), so you can start playing as soon as possible. Plus, AVFoundation is the level where you'll be doing all your "editing" anyway.
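A rough sketch of that approach, hedged: player is assumed to be your AVPlayer and slider the UISlider driving the scrub position; the 0.2-second burst mirrors what you were doing with the timer.

CMTime target = CMTimeMakeWithSeconds(slider.value, 600);
[player seekToTime:target
   toleranceBefore:kCMTimeZero
    toleranceAfter:kCMTimeZero
 completionHandler:^(BOOL finished) {
     if (!finished) return;  // superseded by a newer seek
     dispatch_async(dispatch_get_main_queue(), ^{
         [player play];
         // Stop again after a short, audible burst.
         dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)),
                        dispatch_get_main_queue(), ^{
             [player pause];
         });
     });
 }];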

How can I programmatically pipe a QuickTime movie into a Quartz Composer input?

I'm working on an app that applies Quartz Composer effects to QuickTime movies. Think Photo Booth, except with a QuickTime movie for the input, not a camera. Currently, I load a QuickTime movie as a QTMovie object and have an NSTimer firing 30 times a second. At some point I'll switch to a CVDisplayLink, but NSTimer is okay for now. Every time the NSTimer fires, the app grabs one frame of the movie as an NSImage and passes it to one of the QCRenderer's image inputs. This works, but it's extremely slow. I've tried pulling frames from the movie in all of the formats that [QTMovie frameImageAtTime:withAttributes:error:] supports; they are all either really slow or don't work at all.
I'm assuming that the slowness is caused by moving the image data to main memory, then moving it back for QC to work on it.
Unfortunately, using QC's QuickTime movie patch is out of the question for this project, as I need more control over movie playback than it provides. So the question is: how can I move QuickTime movie frames into my QCRenderer without leaving VRAM?
Check out the v002 Movie Player QCPlugin, which is open source. Anyway, what extra control do you need, exactly?
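For what it's worth, here is a hedged sketch of one way to keep the frames in VRAM (not necessarily how v002 does it): have QuickTime render into an OpenGL texture context that shares the QCRenderer's CGL context, then hand the resulting CVOpenGLTextureRef straight to the composition's image input. movie, renderer, cglContext, cglPixelFormat and the "inputImage" key are placeholders for your own objects and port name.

// One-time setup: a texture-backed visual context on the renderer's CGL context.
QTVisualContextRef textureContext = NULL;
QTOpenGLTextureContextCreate(kCFAllocatorDefault, cglContext, cglPixelFormat,
                             NULL, &textureContext);
SetMovieVisualContext([movie quickTimeMovie], textureContext);

// Per frame (NSTimer or display link callback):
if (QTVisualContextIsNewImageAvailable(textureContext, NULL)) {
    CVOpenGLTextureRef frame = NULL;
    QTVisualContextCopyImageForTime(textureContext, kCFAllocatorDefault, NULL, &frame);
    if (frame != NULL) {
        // QCRenderer image ports can take a CVOpenGLTextureRef, so the pixels
        // should never have to round-trip through main memory.
        [renderer setValue:(id)frame forInputKey:@"inputImage"];
        CVOpenGLTextureRelease(frame);
    }
    QTVisualContextTask(textureContext);
}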

AVPlayerLayer - ReProgramming the Wheel?

I'm currently using an AVPlayer, along with an AVPlayerLayer to play back some video. While playing back the video, I've registered for time updates every 30th of a second during the video. This is used to draw a graph of the acceleration at that point in the video, and have it update along with the video. The graph is using the CMTime from the video, so if I skip to a different portion of the video, the graph immediately represents that point in time in the video with no extra work.
Anywho, as far as I'm aware, if I want to get an interface similar to what the MediaPlayer framework offers, I'm going to have to do that myself.
What I'm wondering is: is there a way to use my AVPlayer with the MediaPlayer framework? (Not that I can see.) Or is there a way to register for incremental time updates with the MediaPlayer framework?
My code, if anyone is interested, follows:
// Update the graph 30 times a second; the returned observer token should be
// kept somewhere so it can be passed to -removeTimeObserver: later.
[moviePlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 30)
                                          queue:dispatch_queue_create("eventQueue", NULL)
                                     usingBlock:^(CMTime time) {
    loopCount = (int)(CMTimeGetSeconds(time) * 30);
    if (loopCount < [dataPointArray count]) {
        // The observer block runs on eventQueue, so hop to the main thread
        // before touching the layer.
        dispatch_sync(dispatch_get_main_queue(), ^{
            [graphLayer setNeedsDisplay];
        });
    }
}];
Thanks!
If you're talking about the window chrome displayed by MPMoviePlayer then I'm afraid you are looking at creating this UI yourself.
AFAIK there is no way of achieving the timing behaviour you need using the MediaPlayer framework, which is very much a simple "play some media" framework. You're doing the right thing by using AVFoundation.
Which leaves you needing to create the UI yourself. My suggestion would be to start with a XIB file to create the general layout: a toolbar at the top with a Done button, a large view that represents a custom playback view (using your AVPlayerLayer), and a separate view to contain your controls.
You'll need to write some custom controller code to automatically show/hide the playback controls and toolbar as needed if you want to simulate the MPMoviePlayer UI.
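As a sketch of the custom playback view mentioned above (hedged; PlaybackView is just an illustrative name), a UIView can use AVPlayerLayer as its backing layer so the XIB can lay it out like any other view:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface PlaybackView : UIView
- (AVPlayerLayer *)playerLayer;
@end

@implementation PlaybackView
// Back the view with an AVPlayerLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [AVPlayerLayer class];
}
- (AVPlayerLayer *)playerLayer {
    return (AVPlayerLayer *)self.layer;
}
@end

Then, wherever you create the player: playbackView.playerLayer.player = moviePlayer;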
You can use https://bitbucket.org/brentsimmons/ngmovieplayer as a starting point (though it may not have existed when you originally asked).
From the project page: "Replicates much of the behavior of MPMoviePlayerViewController -- but uses AVFoundation."
You might want to look at the AVSynchronizedLayer class. I don't think there's a lot about it in the official programming guide, but you can find bits of info here and there: subfurther, Otter Software.
In O'Reilly's Programming iOS 4 (or 5) there's also a short example of how to make a square move and stop along a line in sync with the movie.
Another demo (not a lot of code) is shown in the WWDC 2011 session Working with Media in AV Foundation.
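To make the AVSynchronizedLayer idea concrete, here is a minimal hedged sketch (playerItem and hostLayer are placeholders): animations added beneath the synchronized layer run on the player item's timeline rather than wall-clock time, so the square moves and stops in step with playback and seeking.

AVSynchronizedLayer *syncLayer =
    [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
syncLayer.frame = hostLayer.bounds;

CALayer *square = [CALayer layer];
square.frame = CGRectMake(0, 0, 20, 20);
square.backgroundColor = [UIColor redColor].CGColor;
[syncLayer addSublayer:square];

CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.fromValue = [NSNumber numberWithFloat:0];
slide.toValue = [NSNumber numberWithFloat:300];
slide.duration = CMTimeGetSeconds(playerItem.asset.duration);
// beginTime 0 means "now" to Core Animation; AVCoreAnimationBeginTimeAtZero
// pins the animation to the start of the item's timeline instead.
slide.beginTime = AVCoreAnimationBeginTimeAtZero;
slide.removedOnCompletion = NO;
slide.fillMode = kCAFillModeForwards;
[square addAnimation:slide forKey:@"slide"];

[hostLayer addSublayer:syncLayer];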