I am building a command line tool in ObjC++, using only command line tools (mainly clang) and the AudioUnit v2 (C) API. Output to the speakers works fine, but the input callback for the microphone is never invoked. The iTerm or Terminal hosts have microphone access according to System Settings. The executable also has an embedded Info.plist, although I do not think this is relevant.
The precise security model is not clear to me; it looks like a major security hole if it worked (anything run from the terminal would have access). My guess is that the process launched by an "App" has permissions which then propagate to any child process. However, this view is confused by another case, where an executable I generate does network access (as it happens, only to localhost, because it is a regression test): there it is the executable that asks for network access, not the terminal.
The source code is actually written in Felix, which is translated to C++ and then compiled and linked by clang with the -ObjC option, so embedded Objective-C is supported. The translator is mature enough to have reasonable confidence in its correctness in this kind of simple application. The AudioUnit configuration for the microphone input is:
// configure
var outputElement = 0u32;
var inputElement = 1u32;

// establish callback
status = AudioUnitSetProperty(
  outputAudioUnit,
  kAudioOutputUnitProperty_SetInputCallback,
  kAudioUnitScope_Global,
  inputElement,
  (&inputCallback).address,
  C_hack::sizeof[AURenderCallbackStruct].uint32
);
assert noErr == status;
and the inputElement is enabled and the outputElement disabled. A second audio unit, constructed later with similar code, pumps a sine wave to the speakers, and that works fine. The actual input callback just prints a diagnostic and exits, but the diagnostic is never seen. Originally the terminal had no microphone permission, and we guessed the code was correct but failed for lack of permission to access the microphone. The executable itself still has no permission, but the terminal does now (if I run the executable from the file manager, a terminal window pops up).
No errors are reported at any stage. The callback simply isn't invoked.
To get a callback, you need to:

1. enable IO
2. set the audio unit's input device

Number 2 trips people up because it's not necessary for output (which sensibly defaults to the default output device), nor is it necessary on iOS, probably because there is no concept of an audio device there, at least not in the AudioUnit API.
Surprisingly, both of these requirements are actually documented! Technical Note TN2091 covers the steps needed to record audio using AudioUnits, and its Listings 3 and 4 have sample code that enables IO and sets the input device. Listing 4 sets the audio unit's input device to whatever the default input device is, but any input device will do.
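In plain C (the same API the Felix code above wraps), the two steps look roughly like this. This is a sketch, not the asker's code: `unit` is assumed to be an already-opened AUHAL instance (kAudioUnitSubType_HALOutput), and error handling is minimal:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/CoreAudio.h>

/* Sketch of TN2091 Listings 3 and 4: enable IO on the input element (1),
 * disable it on the output element (0), and bind the unit to the default
 * input device. Only valid on an AUHAL unit. */
static OSStatus configure_input(AudioUnit unit) {
    UInt32 on = 1, off = 0;
    OSStatus err;

    /* step 1: enable IO for input, disable it for output */
    err = AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Input, 1, &on, sizeof on);
    if (err != noErr) return err;
    err = AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Output, 0, &off, sizeof off);
    if (err != noErr) return err;

    /* step 2: set the unit's current device to the default input device */
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof device;
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                     0, NULL, &size, &device);
    if (err != noErr) return err;
    return AudioUnitSetProperty(unit, kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global, 0,
                                &device, sizeof device);
}
```

Without step 2 the input callback registered via kAudioOutputUnitProperty_SetInputCallback is simply never fired, with no error reported anywhere, which matches the symptom in the question.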
Since macOS Mojave (10.14), you need an NSMicrophoneUsageDescription string in your Info.plist. Without it, your app is aborted with an exception. With it, the user is shown a prompt requesting permission to access input devices. You can control when that prompt appears by requesting access yourself, e.g. via AVCaptureDevice's requestAccessForMediaType:completionHandler:.
For a command line tool, you can embed an Info.plist file during the link stage.
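For example (file names here are placeholders and the usage string is made up), a minimal Info.plist carrying the key:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleIdentifier</key>
    <string>com.example.mictool</string>
    <key>NSMicrophoneUsageDescription</key>
    <string>This tool records audio from the default input device.</string>
</dict>
</plist>
```

can be baked into the Mach-O binary at link time with the linker's -sectcreate option:

```shell
# embed Info.plist in the __TEXT,__info_plist section of a non-bundled executable
clang++ -ObjC++ main.cpp -framework AudioToolbox -framework CoreAudio \
    -sectcreate __TEXT __info_plist Info.plist -o mictool
```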
On Catalina you also seem to need to opt into audio-input-enabled sandboxing or the hardened runtime (or both!). Without one of these, your callback is called, but with silence! Both of these runtime environments are enabled using "entitlements", which are metadata embedded in your app via code signing, so you will need some form of code signing. I don't think this necessarily means you need a certificate from Apple: there is "local"/ad-hoc code signing, which seems to embed entitlements without a certificate, although I'm not sure how distributable the resulting binaries are.
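Concretely, an ad-hoc signing step could look like the following (the entitlements file name and tool name are placeholders; the entitlement key is Apple's documented audio-input key):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.device.audio-input</key>
    <true/>
</dict>
</plist>
```

```shell
# ad-hoc sign (-s -) with the hardened runtime and the audio-input entitlement
codesign --force --options runtime --entitlements mic.entitlements -s - ./mictool
```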
Related
Hello, I'm using Neodynamic WebClientPrint to print directly from the browser. There's a part that includes a script that detects whether the client utility is installed, after which it retrieves the printers installed on the system.
However, I intermittently get this error when the script runs: Unable to launch 'webclientprintvi: -getPrinters: because a user gesture is required.
I get this error in the browsers on some client systems, while the browsers on other client systems never show it. Is there a setting that needs to be configured in the browser? The browsers tested are Chrome and Edge.
Thanks.
I discovered that if you are using the WCPP detection feature, then calling jsWebClientPrint.getPrinters() will produce the "...because a user gesture is required" console error. I had a page that called that method without the detection, and it worked without complaining.
I didn't dive any deeper into why the extra embedding detection causes this to happen. I just re-worked the page to call the method on a Refresh button click instead, which, of course, Chrome is then happy with.
I have a command line util that runs in root context (on macOS). It uses CoreLocation to determine the device's location. When this process is launched on High Sierra (in root context), the prompt requesting the user's permission doesn't come up. Also, the process isn't listed in the System Preferences -> Privacy pane. Please note that the command line util is signed. However, when the same util is run in user context, it works as expected.
I also tried manually editing clients.plist. When I add the process to the plist, it shows up in System Preferences, but the CoreLocation delegates are not invoked. I see the following error in the logs: "Registration timer expired, but client is still registering"
Has Apple blocked root processes from collecting location data?
P.S. The code is in Objective-C.
It's generally not safe to run Objective-C code as root; code injection is very easy due to the dynamic nature of the language.
Permission for location access is granted by the user, and root has no login account, so the prompt can't appear. You'll either have to enable root login (but see above) or run the location portions of your code in user space and communicate with a system daemon.
I am writing a program to view an MJPEG stream with VLC. When running VLC directly from the command line I get the error message [mjpeg # 0x10203ea00] No JPEG data found in image over and over again (with a different id each time). I would like to get rid of this, as I think all of that text output is bogging down my program (and it makes my own text output impossible to see, since it is gone about 0.5 seconds after it is written to the console).
I am connecting to blue iris, and am implementing my program with vlcj.
The stream URL is http://10.10.80.39:8080/mjpg/cam1/video.mjpeg. I have tried all of the quiet options that I can find and set the verbosity to 0; I am at a loss as to how to ignore this error.
I am running VLC 2.1. The error happens on multiple computers and multiple OSes.
You simply can't disable everything that vlc, or the libraries that vlc depends on, may emit. Not all of the log/error messages you see can be controlled by setting vlc's log level.
For me the problem is mainly libdvdnav spewing irrelevant messages to stderr.
You say you're using vlcj; well, I too wanted a way to easily ignore those error messages from inside my Java applications. With the latest vlcj-git (at the time of writing), there is an experimental NativeStreams [1] class that might help you.
This class uses JNA to wrap the "C" standard library and programmatically redirect either or both of the native process stdout and stderr streams.
You cannot simply redirect System.out and System.err as some might expect, because these messages clearly come from native code outside the JVM, which of course does not use System.out or System.err.
You can redirect to log files (which could just keep growing), or to "/dev/null".
The downside is that if you redirect the native streams, you also inevitably redirect the corresponding Java stream, so you would lose your own application output. In my own applications that's not a problem, because I log to stdout (which I don't redirect), whereas the vlc messages I don't want fortuitously go to stderr (which I do redirect).
You could also just redirect your java process output streams in the usual way when you launch the JVM. I wanted to be able to do this programmatically rather than having to write a shell script.
So it's not an ideal solution, but it works for me (only tested on Linux).
[1] https://github.com/caprica/vlcj/blob/a95682d5cd0fd8ac1d4d9b7a768f4b5600c87f62/src/main/java/uk/co/caprica/vlcj/runtime/streams/NativeStreams.java
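For reference, the mechanism NativeStreams wraps via JNA is just the C library's freopen(): retargeting the stderr FILE* swaps the underlying descriptor, so output written by native libraries (libvlc, libdvdnav, ...) is redirected too. A minimal sketch at the C level (the function name here is mine, not vlcj's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Retarget this process's native stderr to a file or "/dev/null".
 * Because freopen() replaces the underlying stream, messages written by
 * native code are captured too -- exactly what reassigning System.err
 * from Java cannot achieve. */
static int redirect_stderr_to(const char *path) {
    return freopen(path, "w", stderr) != NULL;
}
```

After calling redirect_stderr_to("/dev/null"), anything the native code writes to stderr disappears, while stdout is untouched; pass a file path instead to keep a (growing) log.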
I'm looking to sandbox an app to comply with the March 1st sandboxing requirement of the Mac App Store. My app includes a built-in terminal emulator which utilizes a forkpty() call to launch processes in a pseudo-tty environment. Unfortunately, this call fails under the sandbox with the error "Operation not permitted", although the fork() call works just fine. Presumably the forkpty() call requires read/write access to the /dev/ directory to create a pseudo-tty (according to the man page). I've tried adding a temporary sandboxing entitlement (com.apple.security.temporary-exception.files.absolute-path.read-write) with read/write access to /, and I now can indeed read and write files anywhere on the file system, but the forkpty() call still fails with the same error. Does anyone know how I might get forkpty() to work under the sandbox?
My app is a programming text editor with a built-in terminal emulator and file browser, so it essentially needs to have access to the entire file system. Apart from the forkpty() problem, this temporary entitlement seems to do what I need. But will Apple accept an app with such a loosely defined temporary exception entitlement?
Thanks in advance guys. I really hope I can get this sandboxing up and running so I continue to distribute my app through the App Store.
It is impossible to implement a useful terminal emulator in a sandboxed application -- even after you add entitlements for the PTY devices, the shell ends up in the same sandbox as the app, preventing it from doing very much.
My iOS application, among other features, downloads files from a specific server. The downloading occurs entirely in the background while the user is working in the app. When a download is complete, the resource associated with the file appears on the app screen.
My users report some misbehavior involving missing resources that I could not reproduce. Some side information leads me to suspect that the problem is caused by the download of the resource's file being aborted mid-way. The app is then left with a partially downloaded file that never gets completed.
To confirm the hypothesis, to make sure any fix works, and to test for such random network vanishing under my feet, I would like to simulate the loss of the network on my test environment: the test server is web sharing on my development Mac, the test device is the iOS simulator running on the same Mac.
Is there a more convenient way to do that, than manually turning web sharing off on a breakpoint?
Depending on how you're downloading your file, one possible option would be to set the callback delegate to null halfway through the download. It would still download the data, but your application would simply stop receiving callbacks. Although, I don't know if that's how the application would function if it truly dropped the connection.
Another option would be to temporarily point the download request at some random file on an external web server, then halfway through just disconnect your computer from the internet. I've done that to test network connectivity issues and it usually works. The interesting problem in your case is that you're downloading from your own computer, so disconnecting won't help. This would just be so you can determine the order of callbacks within the application when this happens (does it make any callbacks at all? in what order?), so that you can simulate that behavior when actually pointed at your test server.
Combine both options together, I guess, to get the best solution.
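A deterministic variant of the second idea, instead of pulling the network cable: a throwaway local "server" that sends only part of the promised payload and then slams the connection shut reproduces the truncated download on demand. This is an illustrative sketch, not from the original answers; names, the port, and the sizes are made up:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

enum { TOTAL = 1024, SENT = 512 };  /* promised vs. actually-delivered bytes */

/* open a listening socket on loopback:port */
static int listen_on(unsigned short port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(s, (struct sockaddr *)&a, sizeof a);
    listen(s, 1);
    return s;
}

/* serve one connection, deliver only half the payload, then hang up:
 * the client sees an early EOF, i.e. a download aborted mid-way */
static void serve_truncated(int lsock) {
    int c = accept(lsock, NULL, NULL);
    char chunk[SENT];
    memset(chunk, 'x', sizeof chunk);
    write(c, chunk, sizeof chunk);   /* only SENT of the TOTAL bytes */
    close(c);                        /* abrupt close */
}
```

Pointing the app's download URL at such a stub lets you observe exactly which callbacks fire (and in what order) when the transfer dies, without touching web sharing at all.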