How to capture App Screen as Video with Audio in Mac OS X? - objective-c

I am writing an OS X application where I need to record only my application's view (not the whole display), together with the audio it emits.
Think of it as a game app: I need to record the complete gameplay view of the application. How should I go about doing this?
I am aware of AVCaptureScreenInput and the example, but how do I capture only my application's view?

From the website you posted:
Note: By default, AVCaptureScreenInput captures the entire screen. You may set its cropRect property to limit the capture rectangle to a subsection of the screen.
Just set this property to the window's/view's rect and you're done.
Of course, you need to update it and restart the recording whenever the window's/view's rect changes.
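A minimal sketch of that idea (names like captureView are illustrative, and the coordinate conversion for cropRect may need adjusting for your window and display setup):
// Minimal sketch, assuming a single window on the main display; requires AVFoundation.
// cropRect is expressed in the display's coordinate space, so the conversion below
// may need adjusting for your setup.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;

AVCaptureScreenInput *screenInput =
    [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];

// Convert the view's bounds to screen coordinates and crop the capture to that rect.
NSRect rectInWindow = [captureView convertRect:[captureView bounds] toView:nil];
NSRect rectOnScreen = [[captureView window] convertRectToScreen:rectInWindow];
screenInput.cropRect = NSRectToCGRect(rectOnScreen);

if ([session canAddInput:screenInput]) {
    [session addInput:screenInput];
}
// ...then add an audio input and an AVCaptureMovieFileOutput and start recording.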

Read the documentation carefully; there is a comment about displays:
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.
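For reference, enumerating the active displays mentioned in that comment looks roughly like this (the fixed maximum of 16 displays is an arbitrary choice):
// List all active displays (the maximum of 16 is arbitrary).
CGDirectDisplayID displays[16];
uint32_t displayCount = 0;
if (CGGetActiveDisplayList(16, displays, &displayCount) == kCGErrorSuccess) {
    for (uint32_t i = 0; i < displayCount; i++) {
        NSLog(@"Display %u: id %u%@", i, displays[i],
              displays[i] == CGMainDisplayID() ? @" (main)" : @"");
    }
}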

Related

Clear preview window in Media Foundation

Is it possible to clear a preview window after the camera preview is done? I am using MFCaptureEngine and calling m_pPreview->SetRenderHandle(m_hwnd) to render the video, but when I stop the video I am not able to draw on the window; the last frame from the camera remains. I need to fill the window with a black brush and draw some text, but the image from the camera cannot be overdrawn.
It is not clear from your post what MFCaptureManager is, but from the call SetRenderHandle(m_hwnd) I see that you use IMFCapturePreviewSink::SetRenderHandle. I faced a similar problem some time ago; it is related to the difference between the old window system that existed up to Windows XP and the current window system introduced with Vista. The code hands the window over to the renderer by calling IMFCapturePreviewSink::SetRenderHandle - for IMFCapturePreviewSink that renderer is DirectX 11 - and DirectX 11 gets FULL access to the window, which is then handled by the current window system. As a result, any call that fills the window with a black brush or draws text through the old Windows API from the Win95-XP generation does nothing, because the window's device context is LOCKED by DirectX 11.
There are three ways to resolve this problem:
Write a new UI with the new Microsoft DirectComposition API, which is based on DirectX 11, and set it via IMFCapturePreviewSink::SetRenderSurface.
Create an EVR media sink with MFCreateVideoRenderer - it creates a DirectX 9 video renderer that is compatible with the old Windows API from the Win95-XP generation - and set this IMFMediaSink via IMFCapturePreviewSink::SetCustomSink.
Write your own video renderer on top of DirectX 9 (see, for example, MFCaptureD3D/device.cpp) and draw the raw IMFSamples delivered to the callback set via IMFCapturePreviewSink::SetSampleCallback.
Regards.
I've implemented it this way:
// Sink: get the preview sink from the capture engine
CComPtr<IMFCaptureSink> pSink;
m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PREVIEW, &pSink);

// Create the EVR (DirectX 9) renderer and install it as the custom sink
CComPtr<IMFMediaSink> pCustomSink;
::MFCreateVideoRenderer(IID_IMFMediaSink, (void**)&pCustomSink);
CComPtr<IMFCapturePreviewSink> pPreviewSink;
pSink.QueryInterface(&pPreviewSink);
pPreviewSink->SetCustomSink(pCustomSink);

// Preview: point the preview at the target window
pSink.QueryInterface(&m_pPreview); // or pPreviewSink.QueryInterface(&m_pPreview)
m_pPreview->SetRenderHandle(m_hwndPreview);
But the behaviour is still the same (the screen cannot be redrawn after the preview is stopped).

How to programmatically start front camera of iPad?

I would like to start the front camera of the iPad when the app starts.
How do I do this programmatically?
Please let me know.
The first thing you need to do is detect whether your device has a front-facing camera. One way is to iterate through the video devices, but it is simpler to use this method of UIImagePickerController:
+ (BOOL)isCameraDeviceAvailable:(UIImagePickerControllerCameraDevice)cameraDevice
This is a class method and UIImagePickerControllerCameraDevice can take two values:
- UIImagePickerControllerCameraDeviceRear
- UIImagePickerControllerCameraDeviceFront
Example code:
if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront])
{
    // do something
}
Note that this is available for iOS 4.0 and later.
Also, I am not sure whether there is any API to make the camera start in front-facing mode up front. The camera always seems to start in the same mode the user left it in the last time it was used. Maybe by design Apple did not expose any API to change this; maybe Apple wanted the users to make that call.
Nevertheless, you can at least detect the availability of the front camera and provide your feature.
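Putting the pieces together, a minimal sketch (assuming ARC, iOS 5's presentViewController:animated:completion:, and an illustrative method name; on iOS 4 use presentModalViewController:animated: instead):
// Minimal sketch: present a camera picker that prefers the front camera.
// Assumes self conforms to UIImagePickerControllerDelegate and UINavigationControllerDelegate.
- (void)showCameraPreferringFrontDevice
{
    // Bail out if the device has no camera at all (e.g. the Simulator).
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return;
    }

    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;

    // Prefer the front camera when available; otherwise the default (rear) camera is used.
    if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
        picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
    }

    [self presentViewController:picker animated:YES completion:nil];
}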
If I understand your question correctly, all you have to do is open the camera in front mode instead of rear mode, so write this inside the method where you set up the picker for the first time:
picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
Hope this answers your question.

Is it possible to play multiple clips using presentMoviePlayerViewControllerAnimated?

I have a situation where I'd like to play 2 video clips back to back using an MPMoviePlayerViewController displayed with presentMoviePlayerViewControllerAnimated.
The problem is that the modal view automatically closes itself as soon as the first movie is complete.
Has anyone found a way to do this?
Three options:
You may use MPMoviePlayerController and start playback of the 2nd (Nth) item after the previous one completes. This, however, will introduce a small gap between the videos caused by identification and pre-buffering of the content.
You may use AVQueuePlayer; AVQueuePlayer is a subclass of AVPlayer used to play a number of items in sequence. See its reference for more, and the sketch after this list.
You may use AVComposition to compose, at runtime, a single video out of the two (or N) you need to play back. Note that this works only with locally stored videos, not with remote content (streaming or progressive download). Then use AVPlayer for the playback.
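A minimal AVQueuePlayer sketch (the clip file names and the host view are illustrative; requires AVFoundation):
// Minimal sketch: play two local clips back to back with AVQueuePlayer.
NSURL *firstURL  = [[NSBundle mainBundle] URLForResource:@"clip1" withExtension:@"mp4"];
NSURL *secondURL = [[NSBundle mainBundle] URLForResource:@"clip2" withExtension:@"mp4"];

NSArray *items = [NSArray arrayWithObjects:
                  [AVPlayerItem playerItemWithURL:firstURL],
                  [AVPlayerItem playerItemWithURL:secondURL], nil];
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:items];

// AVQueuePlayer has no built-in UI, so display it with an AVPlayerLayer.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:queuePlayer];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];

[queuePlayer play];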
It's not possible with presentMoviePlayerViewControllerAnimated alone. If the video assets are in the local file system, consider AVComposition.

NSScreenNumber changes (randomly)?

In my application I need to distinguish between different displays, which I do by using the NSScreenNumber key of the deviceDescription dictionary provided by NSScreen. So far everything worked flawlessly, but now all of a sudden I sometimes get a different screen ID for my main screen (it's a laptop and I haven't attached a second screen in months; it's always the same hardware). The ID used to be 69676672, but now most of the time I get 2077806975.
At first I thought I might be misinterpreting the NSNumber somehow, but that doesn't seem to be the case; I also checked with the CGMainDisplayID() function and I get the same value. What is even weirder is that some Apple applications still seem to get the old ID: e.g. the desktop image is referenced in its config file using the screen ID, and when updating the desktop image Apple's app uses the "correct" (= old) ID.
I am starting to wonder if there might have been a change in a recent update (10.7.1 or 10.7.2) that led to this. Has anybody else noticed something similar or had this issue before?
Here is the code that I use:
// This is in an NSScreen category
- (NSNumber *)uniqueScreenID {
    return [[self deviceDescription] objectForKey:@"NSScreenNumber"];
}
And for getting an int:
// Assuming screen points to an instance of NSScreen
NSLog(#"Screen ID: %i", [[screen uniqueScreenID] intValue]);
This is starting to get frustrating, appreciate any help/ideas, thanks!
For Macs that have both built-in graphics and a discrete graphics card (such as MacBook Pro models with on-board Intel graphics and a separate graphics card), the display ID can change when the system automatically switches between the two. You can disable "Automatic graphics switching" in the Energy Saver preferences pane to test whether this is the cause of your screen number changes (when disabled, the system will always use the discrete graphics card).
On such systems, the choice of which graphics is in use at a particular time is tied to the applications that are currently running and their needs. I believe any use of OpenGL by an application would cause a switch to the discrete graphics card, for instance.
If you need to notice when such a switch occurs while your application is running, you can register a callback (CGDisplayRegisterReconfigurationCallback) and examine the changes that occur (kCGDisplayAddFlag, kCGDisplayRemoveFlag, etc.). If you're trying to match a display to one previously used/encountered, you will need to go beyond just comparing display IDs.
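A minimal sketch of such a callback (the function name and logging are illustrative; the constants come from Quartz Display Services):
// Minimal sketch: get notified when the display configuration changes.
static void displayReconfigured(CGDirectDisplayID displayID,
                                CGDisplayChangeSummaryFlags flags,
                                void *userInfo)
{
    if (flags & kCGDisplayAddFlag) {
        NSLog(@"Display %u was added", displayID);
    }
    if (flags & kCGDisplayRemoveFlag) {
        NSLog(@"Display %u was removed", displayID);
    }
}

// Register once (e.g. at launch); unregister with
// CGDisplayRemoveReconfigurationCallback when you no longer need it.
CGDisplayRegisterReconfigurationCallback(displayReconfigured, NULL);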

AVPlayerLayer - ReProgramming the Wheel?

I'm currently using an AVPlayer, along with an AVPlayerLayer to play back some video. While playing back the video, I've registered for time updates every 30th of a second during the video. This is used to draw a graph of the acceleration at that point in the video, and have it update along with the video. The graph is using the CMTime from the video, so if I skip to a different portion of the video, the graph immediately represents that point in time in the video with no extra work.
Anywho, as far as I'm aware, if I want to get an interface similar to what the MediaPlayer framework offers, I'm going to have to do that myself.
What I'm wondering is: is there a way to use my AVPlayer with the MediaPlayer framework? (Not that I can see.) Or is there a way to register for incremental time updates with the MediaPlayer framework?
My code, if anyone is interested, follows:
[moviePlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 30)
                                           queue:dispatch_queue_create("eventQueue", NULL)
                                      usingBlock:^(CMTime time) {
    loopCount = (int)(CMTimeGetSeconds(time) * 30);
    if (loopCount < [dataPointArray count]) {
        dispatch_sync(dispatch_get_main_queue(), ^{
            [graphLayer setNeedsDisplay];
        });
    }
}];
Thanks!
If you're talking about the window chrome displayed by MPMoviePlayer then I'm afraid you are looking at creating this UI yourself.
AFAIK there is no way of achieving the timing behaviour you need using the MediaPlayer framework, which is very much a simple "play some media" framework. You're doing the right thing by using AVFoundation.
Which leaves you needing to create the UI yourself. My suggestion would be to start with a XIB file to create the general layout; toolbar at the top with a done button, a large view that represents a custom playback view (using your AVPlayerLayer) and a separate view to contain your controls.
You'll need to write some custom controller code to automatically show/hide the playback controls and toolbar as needed if you want to simulate the MPMoviePlayer UI.
You can use https://bitbucket.org/brentsimmons/ngmovieplayer as a starting point (it may not have existed at the time you asked).
From the project page: "Replicates much of the behavior of MPMoviePlayerViewController -- but uses AVFoundation."
You might want to look at the AVSynchronizedLayer class. I don't think there's a lot about it in the official programming guide; you can find bits of info here and there: subfurther, Otter Software.
In O'Reilly's Programming iOS 4 (or 5) there's also a short example of how to make a square move/stop along a line in sync with the playback.
Another demo (not a lot of code) is shown during WWDC 2011 session Working with Media in AV Foundation.
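A minimal AVSynchronizedLayer sketch of that idea (the box layer, the host playerView, and the use of the asset's duration are illustrative assumptions); because the animation is attached to a synchronized layer, it pauses, seeks, and stops together with the video:
// Minimal sketch: a small layer animated along the player item's timeline.
AVPlayerItem *item = moviePlayer.currentItem;
AVSynchronizedLayer *syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:item];
syncLayer.frame = playerView.bounds;

CALayer *box = [CALayer layer];
box.frame = CGRectMake(0.0, 0.0, 20.0, 20.0);
box.backgroundColor = [UIColor redColor].CGColor;
[syncLayer addSublayer:box];

// Slide the box across the view over the full duration of the asset
// (assumes the asset's duration is already loaded/known).
CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.fromValue = [NSNumber numberWithFloat:0.0f];
slide.toValue = [NSNumber numberWithFloat:CGRectGetWidth(playerView.bounds)];
slide.duration = CMTimeGetSeconds(item.asset.duration);
slide.beginTime = AVCoreAnimationBeginTimeAtZero; // i.e. at t = 0 of the item
slide.removedOnCompletion = NO;
[box addAnimation:slide forKey:@"slide"];

[playerView.layer addSublayer:syncLayer];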