Media Foundation - Custom Media Source & Sensor Profile - camera

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own media source.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works in terms of what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLink API, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
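For reference, here is roughly what I am doing to enumerate and activate the device (simplified sketch; MFStartup/COM initialization happen elsewhere, error handling is trimmed, and picking devices[0] is only illustrative):

    // Rough sketch of my enumeration/activation code (C++).
    // Link against mfplat.lib, mf.lib, mfuuid.lib, ole32.lib.
    #include <windows.h>
    #include <mfapi.h>
    #include <mfidl.h>

    IMFMediaSource* CreateCaptureSource()
    {
        IMFAttributes* attrs = nullptr;
        MFCreateAttributes(&attrs, 1);
        attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                       MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

        IMFActivate** devices = nullptr;
        UINT32 count = 0;
        MFEnumDeviceSources(attrs, &devices, &count);   // the Intensity Pro 4K shows up here

        IMFMediaSource* source = nullptr;
        if (count > 0)
        {
            // This is the call that fails with MF_E_INVALIDMEDIATYPE for the capture card.
            HRESULT hr = devices[0]->ActivateObject(IID_PPV_ARGS(&source));
            if (FAILED(hr))
                source = nullptr;
        }

        for (UINT32 i = 0; i < count; ++i) devices[i]->Release();
        CoTaskMemFree(devices);
        attrs->Release();
        return source;   // null on failure
    }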
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

Related

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS and I'm hoping to add a new feature to the app: keeping a Mac awake in closed-display (clamshell) mode without a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake while in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with 2 possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
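Regarding the assertion types themselves: the public IOPMAssertionCreateWithName() takes the assertion type as a plain CFString, so it may be possible to pass the private type's string value without linking a private framework at all. A hedged sketch, assuming the string value matches the name you observed; verify against IOPMLibPrivate.h, and keep in mind these types are unsupported and could change in any OS update:

    // Sketch: creating a (private, unsupported) power assertion via the public API.
    // The assertion-type string below is an assumption taken from IOPMLibPrivate.h;
    // double-check the real constant's value before relying on it.
    // Link against the IOKit framework.
    #include <IOKit/pwr_mgt/IOPMLib.h>
    #include <CoreFoundation/CoreFoundation.h>

    IOPMAssertionID KeepAwakeWhileClamshell()
    {
        IOPMAssertionID assertionID = kIOPMNullAssertionID;
        IOReturn result = IOPMAssertionCreateWithName(
            CFSTR("UserIsActive"),                 // private assertion type (unsupported)
            kIOPMAssertionLevelOn,
            CFSTR("Amphetamine: closed-display keep-awake"),
            &assertionID);

        return (result == kIOReturnSuccess) ? assertionID : kIOPMNullAssertionID;
        // Later, release it with IOPMAssertionRelease(assertionID);
    }

Whether this behaves identically to whatever AntiSleep does is something you would have to test.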
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
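For completeness, IOHIDPostEvent()'s signature is fairly awkward to work with; another user-space option for synthesizing input events, not necessarily what AntiSleep uses and untested against the closed-display-mode requirement, is Quartz Event Services:

    // Sketch: synthesizing a key press/release from user space with Quartz Event Services.
    // Shown only as an example of user-space input synthesis.
    #include <ApplicationServices/ApplicationServices.h>

    void PostDummyKeyPress()
    {
        CGEventRef keyDown = CGEventCreateKeyboardEvent(nullptr, (CGKeyCode)0x3B /* Control */, true);
        CGEventRef keyUp   = CGEventCreateKeyboardEvent(nullptr, (CGKeyCode)0x3B, false);

        CGEventPost(kCGHIDEventTap, keyDown);   // injected at the HID system level
        CGEventPost(kCGHIDEventTap, keyUp);

        CFRelease(keyDown);
        CFRelease(keyUp);
    }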

Programmatically capture audio in OS X [duplicate]

I'm due to work on a small application that captures audio from the Mac's Audio Queue and needs to save it to disk in some reasonable audio format.
Does anyone have some decent sample code (Cocoa / Objective-C) that they can share?
I specifically need to capture the audio that is being passed to the Built-in Output device in order to record it. Any insights? The answers so far have been helpful, but have not helped me understand how the data going to the output can be captured, agnostic of the input source.
Working with audio in Mac OS X involves interfacing with Core Audio. For a quick overview, take a look at the Core Audio Overview.
You will need to interface with the AUHAL to perform input and output; a technical note exists detailing the steps required to do so. This code is usually written in C++, as that is the approach taken in the SimplePlayThru demo.
This doesn't cover the actual steps required to capture that audio input. However, these links should provide you with enough sample code to begin interfacing with your input device. I'll post more links in this answer if I happen across them.
Take a look at /Developer/Example/CoreAudio/Services/AudioFileTools; specifically, look at afrecord.cpp. Admittedly, this is not Cocoa per se; Cocoa itself doesn't seem to have any specific capabilities for recording. If you want to interface with the C++ code there, you'll likely need to write some Objective-C++, as in SimplePlayThru.
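To make the AUHAL step more concrete, here is a minimal sketch of instantiating the HAL output unit and flipping its input/output buses (device selection, the input callback, and format negotiation are omitted). Note that, as far as I know, this only captures from an input device; grabbing what is being sent to the built-in output still requires a loopback or aggregate device such as Soundflower.

    // Sketch: creating an AUHAL unit, enabling input on bus 1, disabling output on bus 0.
    #include <AudioUnit/AudioUnit.h>

    AudioUnit CreateInputAUHAL()
    {
        AudioComponentDescription desc = {};
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_HALOutput;   // the AUHAL
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponent comp = AudioComponentFindNext(nullptr, &desc);
        AudioUnit unit = nullptr;
        AudioComponentInstanceNew(comp, &unit);

        UInt32 enable = 1, disable = 0;
        // Bus 1 is the input side of the device, bus 0 the output side.
        AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input,  1, &enable,  sizeof(enable));
        AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &disable, sizeof(disable));

        // Next steps (not shown): pick the capture device with
        // kAudioOutputUnitProperty_CurrentDevice, set an input callback with
        // kAudioOutputUnitProperty_SetInputCallback, then AudioUnitInitialize/AudioOutputUnitStart.
        return unit;
    }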
There is good example code in Ulli Kusterer's GitHub repository.
CocoaDev also has an article on that topic. The source code at the bottom of the page uses QuickTime's Sequence Grabber API, but I would go with Core Audio.

What is the nature of the gestures needed in Windows 8?

Most laptop touchpads don't handle multitouch and hence are not able to send swipe gestures to the OS.
Would it be possible to send gestures to Windows from an external device, like a Teensy or a recent Arduino, which can already emulate a keyboard and a mouse? I could send buttons 4 and 5 (mouse wheel up and down), but I would like to send a real swipe gesture (for example, driven by a flex sensor...).
One way to work with an Arduino or similar board is to use the Microsoft .NET Micro Framework, which is open source and available at no cost: Micro Framework
There are other frameworks available for the Arduino that you might want to use. Whatever you use to read the sensor hardware, the output you send to Windows still has to meet certain specifications.
To connect to the hardware that reads your gestures, you will need to understand how drivers are created, so take a look at this: Info on drivers.
The link above covers sensors, which appears to be not quite what you are looking for; you want to use "gestures", but first you have to be able to make the connection to your device, and that guide might help. I have reviewed it for other reasons.
There is a lot to dig through, but the first step, in my opinion, is to understand how to get your software to communicate with Windows 8. Let me know if you have any other questions. I am not the best person for this, so you may also want to ask the community at the Micro Framework link shown above.
Good luck.
That's perfectly possible. What you're effectively suggesting is creating your own input peripheral, like a trackpad, and using it to send input. As long as Windows recognizes the device as an input source, it will work.
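If a small helper program on the PC side is acceptable (rather than making the device itself enumerate as a touch digitizer), Windows 8 also exposes a touch-injection API. A rough sketch of injecting a single synthetic contact; a real swipe would move the contact across several injected frames:

    // Sketch: injecting a synthetic touch contact on Windows 8+ with the touch-injection API.
    #ifndef _WIN32_WINNT
    #define _WIN32_WINNT 0x0602   // Windows 8, needed for the touch-injection declarations
    #endif
    #include <windows.h>

    bool InjectTouchDown(int x, int y)
    {
        // InitializeTouchInjection only needs to run once per process.
        static const bool initialized =
            InitializeTouchInjection(1, TOUCH_FEEDBACK_DEFAULT) != FALSE;
        if (!initialized) return false;

        POINTER_TOUCH_INFO contact = {};
        contact.pointerInfo.pointerType       = PT_TOUCH;
        contact.pointerInfo.pointerId         = 0;
        contact.pointerInfo.ptPixelLocation.x = x;
        contact.pointerInfo.ptPixelLocation.y = y;
        contact.pointerInfo.pointerFlags =
            POINTER_FLAG_DOWN | POINTER_FLAG_INRANGE | POINTER_FLAG_INCONTACT;
        contact.touchFlags = TOUCH_FLAG_NONE;
        contact.touchMask  = TOUCH_MASK_CONTACTAREA;
        contact.rcContact  = { x - 2, y - 2, x + 2, y + 2 };   // small contact area around the point

        return InjectTouchInput(1, &contact) != FALSE;
        // For a swipe: repeat with POINTER_FLAG_UPDATE while moving ptPixelLocation,
        // then finish with POINTER_FLAG_UP.
    }

An Arduino or Teensy could then simply signal this helper over serial or as a keyboard/mouse, and the helper translates that into injected touch gestures.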

How can I change the audio output device in Objective-C?

I'm building a simple Cocoa app and I want to direct the audio output to a specific device, instead of the system selected one. I know some apps, like Skype, let you select where to send the output to. How do they do this?
I tried the MTCoreAudio framework, but I can't even compile my app (or their AudioMonitor demo) with it included, and the errors aren't helpful (_objc_fatal). Are there any complete examples that I can learn from? So far my searches haven't turned anything up.
Thanks!
The CAPlayThrough example on the Mac Dev Center Sample Code library shows how to list all of the available input and output devices, and select a default device from a menu.
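The relevant pieces boil down to asking the HAL for its device list and then setting kAudioOutputUnitProperty_CurrentDevice on your output unit. A minimal sketch (error handling omitted):

    // Sketch: enumerating audio devices and routing an output AudioUnit to a chosen one.
    #include <CoreAudio/CoreAudio.h>
    #include <AudioUnit/AudioUnit.h>
    #include <vector>

    std::vector<AudioDeviceID> AllDevices()
    {
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDevices,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };

        UInt32 size = 0;
        AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, nullptr, &size);

        std::vector<AudioDeviceID> devices(size / sizeof(AudioDeviceID));
        AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, nullptr, &size, devices.data());
        return devices;
    }

    void RouteOutputTo(AudioUnit outputUnit, AudioDeviceID device)
    {
        // Point a HAL output unit at the chosen device instead of the system default.
        AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global, 0, &device, sizeof(device));
    }

To show device names in a menu (as CAPlayThrough does), query each device's kAudioDevicePropertyDeviceNameCFString.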
Have you looked through the sample code on http://developer.apple.com?
Look at these projects: http://developer.apple.com/mac/library/navigation/index.html?section=Resource+Types&topic=Sample+Code
Namely, the DefaultAudioUnit project.
I should say that working with Core Audio is more challenging than Cocoa. Most of the APIs are C-based (which I find harder). You should also read the Core Audio programming guide to get a sense of how the audio system is put together.
