Screen record with sound - AVFoundation? Desktop Mac - Objective-C

I'm trying to create two things, both for desktop Mac and both involving recording screen/audio.
The first, which is my main priority right now, is a song identifier. The second is a screen capture (with audio) tool.
I was thinking of using AVFoundation, but I don't see any sound recording capabilities there, just playback - https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/index.html#//apple_ref/doc/uid/TP40008067
Is it possible to record system audio somehow?
Thanks

I've used this document in the past to figure out the live screen recording part. https://developer.apple.com/library/mac/qa/qa1740/_index.html
You'll probably also find the code snippet in the AVCaptureSession overview useful.
The gist of it is that AVCaptureSession is the object that controls all the inputs and outputs for a given capture session. In this case the screen input would be AVCaptureScreenInput, and for audio I believe you want an AVCaptureDeviceInput of type audio. There is also a way to list all the available AVCaptureDevices of a specific type. Then you add an AVCaptureMovieFileOutput to your session's outputs.
I know that's a little high level, but that technical Q&A as well as looking into getting particular input types should help.
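
Putting that together, a minimal sketch of what the answer describes (the output path and the recording delegate are placeholders; error handling is trimmed):

#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Inside some setup method:
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Screen input -- CGMainDisplayID() is the main display.
AVCaptureScreenInput *screenInput =
    [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];
if ([session canAddInput:screenInput])
    [session addInput:screenInput];

// Audio input. Note this is the default *capture* device (usually the
// microphone); capturing what the speakers play needs a loopback device.
NSError *error = nil;
AVCaptureDevice *audioDevice =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput =
    [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
if (audioInput && [session canAddInput:audioInput])
    [session addInput:audioInput];

// Movie file output.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput])
    [session addOutput:movieOutput];

[session startRunning];
// recordingDelegate is assumed to implement AVCaptureFileOutputRecordingDelegate.
[movieOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:@"/tmp/recording.mov"]
                         recordingDelegate:recordingDelegate];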

Related

How to work with VoiceOver in Objective-C?

It's the first time I've worked with VoiceOver in Objective-C. I'm trying to make a simple app for the blind community.
Can you tell me the necessary classes and methods for working with VoiceOver?
What should my application do besides playing and pausing voice?
Please show me the main protocols and methods with examples.
A full tutorial would be appreciated :-)
A video would be perfect.
You just need to make the elements in your app accessible in the accessibility tree. By default they are all set to YES, so every element is read by VoiceOver; you don't have to write any code for that.
However, you do need to write code to post accessibility notifications and to make some elements not read by VoiceOver.
You can change the VoiceOver settings in the device's accessibility settings.
Please read Apple's documentation on UIAccessibility.
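
For illustration, a small hypothetical UIKit snippet covering the points above -- hiding an element from VoiceOver, exposing another, and posting a notification. decorativeImageView and statusLabel stand in for your own views:

// Hypothetical views inside a view controller:
decorativeImageView.isAccessibilityElement = NO;  // VoiceOver will skip it
statusLabel.isAccessibilityElement = YES;         // the default for UILabel anyway
statusLabel.accessibilityLabel = @"Playback paused";

// Tell VoiceOver the screen changed and where to move focus:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, statusLabel);
// Or, for a smaller in-place update:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, statusLabel);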

Capture screen and audio with objective-c

I'm using an AVCaptureSession to create a screen recording (OS X), but I'd also like to add the computer audio to it (not the microphone, but anything that's playing through the speakers). I'm not really sure how to do that, so the first thing I tried was adding an audio device like so:
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
After adding this device the audio was recorded, but it sounded like it was captured through the microphone. Is it possible to actually capture the computer's output sound this way, like QuickTime does?
Here's an open-source framework that supposedly makes capturing speaker output as easy as taking a screenshot.
https://github.com/pje/WavTap
The WavTap home page does mention that it requires kernel extension signing privileges to run under OS X 10.10 and newer, which requires signing into your Apple Developer account and submitting this form. More information can be found here.
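
Once a loopback driver like WavTap is installed, it should show up as a regular audio capture device, so you can pick it from the device list instead of the default microphone. A hedged sketch -- the device name is an assumption, so check localizedName on your machine:

AVCaptureDevice *loopback = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio]) {
    // Match the loopback device by name (assumed to contain "WavTap").
    if ([device.localizedName rangeOfString:@"WavTap"].location != NSNotFound) {
        loopback = device;
        break;
    }
}

NSError *error = nil;
AVCaptureDeviceInput *systemAudioInput =
    [AVCaptureDeviceInput deviceInputWithDevice:loopback error:&error];
if (systemAudioInput && [session canAddInput:systemAudioInput])
    [session addInput:systemAudioInput]; // session: the AVCaptureSession from the question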

DSC-HX400 RAW image data & Movie Recording

I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera there are a couple of items that are not exposed via the API that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard and fine JPEG, but both of those leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it out to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shot and movie modes, but the API only exposes the mode I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) via the API, even though I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API will let me transition back to still mode (by cancelling recording). Are there plans to support triggering a movie recording via the API from still capture mode (seeing as the firmware already supports this functionality)?
Answers to the questions above:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
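
For reference, the Camera Remote API is called over HTTP with JSON-RPC-style request bodies. A sketch of calling setShootMode from Objective-C -- the endpoint URL below is a placeholder (the real one is discovered per device):

NSURL *endpoint = [NSURL URLWithString:@"http://192.168.122.1:8080/sony/camera"]; // placeholder
NSDictionary *body = @{ @"method":  @"setShootMode",
                        @"params":  @[@"movie"],   // or @"still"
                        @"id":      @1,
                        @"version": @"1.0" };

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:endpoint];
request.HTTPMethod = @"POST";
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = [NSJSONSerialization dataWithJSONObject:body options:0 error:NULL];

// Fire the request and log the camera's JSON reply.
[[[NSURLSession sharedSession] dataTaskWithRequest:request
                                 completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (data) NSLog(@"%@", [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]);
}] resume];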

I cannot get a QTCaptureSession to Capture when in a Terminal Application

I've got a terminal application that needs to take a webcam picture and then perform some processing on it. I'm having trouble getting it to initialize. There's a fairly complete demo with an app called MyRecorder in the Apple docs that uses QTKit, which I was able to make work fine. I was also able to modify it to grab a single frame instead of a stream.
When I move this to a terminal application, calling startRunning on the QTCaptureSession simply does nothing. There are no errors, and everything reports success, but my webcam doesn't light up and no frames are captured.
Any idea what's going on here? Are there any kind of security restrictions, or other kinds of restrictions that would prevent the QTCaptureSession from working?
Switching to AVFoundation solved my problem. I'm still not certain what the issue was, but for now AVFoundation seems like the way to go, since it was designed to replace QTKit anyway.
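
For anyone hitting the same wall, a minimal command-line sketch of the AVFoundation route. The likely difference from the QTKit attempt is that capture callbacks need a running run loop, which a terminal tool has to keep alive itself; the file path is a placeholder:

// cli_capture.m -- compile with:
//   clang cli_capture.m -fobjc-arc -framework Foundation -framework AVFoundation -framework CoreMedia -o cli_capture
#import <AVFoundation/AVFoundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        NSError *error = nil;
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *camera =
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (!input) { NSLog(@"No camera input: %@", error); return 1; }
        [session addInput:input];

        AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
        [session addOutput:output];
        [session startRunning];

        // Grab one frame asynchronously and write it out as JPEG.
        AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
        [output captureStillImageAsynchronouslyFromConnection:connection
                                            completionHandler:^(CMSampleBufferRef buffer, NSError *err) {
            if (buffer) {
                NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
                [jpeg writeToFile:@"/tmp/frame.jpg" atomically:YES];
            }
            exit(err ? 1 : 0);
        }];

        // Keep the run loop alive so the capture callback can fire --
        // probably why the QTKit version silently did nothing.
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}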

MPMoviePlayerController mp4 file streaming

I have few links to .mp4 video files like
file1.mp4
file2.mp4
file3.mp4
I need to play them all in the player as one file. Actually, not necessarily as one file, but the player must act like it's one file. My best guess is to create custom controls and a playback area for MPMoviePlayerController and divide the playback into time slices.
For instance
file1.mp4 file2.mp4 file3.mp4
-----------|------------|------------
Is this a good approach? Can this be done any more easily?
Also, the server from which I'll get the videos is not customizable, and I can't convert the videos to MPEG-2 and stream them via .m3u8 files.
Thanks in advance
I guess you can use AVQueuePlayer; it supports multi-item playback. I haven't tried it myself (I used AVPlayer for single-item playback), but I believe using AVQueuePlayer should reduce your overall effort. (You will still be responsible for drawing playback controls.)
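
A rough sketch of the AVQueuePlayer approach, assumed to live inside a view controller -- the URLs are placeholders for the question's file1/2/3.mp4:

#import <AVFoundation/AVFoundation.h>

NSMutableArray *items = [NSMutableArray array];
for (NSString *urlString in @[@"https://example.com/file1.mp4",
                              @"https://example.com/file2.mp4",
                              @"https://example.com/file3.mp4"]) {
    [items addObject:[AVPlayerItem playerItemWithURL:[NSURL URLWithString:urlString]]];
}

// AVQueuePlayer advances to the next item automatically when one ends.
AVQueuePlayer *player = [AVQueuePlayer queuePlayerWithItems:items];
AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
layer.frame = self.view.bounds;
[self.view.layer addSublayer:layer];
[player play];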
I stuck with the scenario I described in the question and was able to create the player component.