Detecting, and Shirking Off, the Debugger - objective-c

An interesting feature in the new iTunes is its refusal to accept debugger processes attaching to it (crippling tools like F-Script). Not only would this involve a detection method, it would also require either a process that checks for a debugger attaching itself mid-run, or an entry-point hook that the debugger would trigger when it attempts to attach. In addition, it would need a way to tell the debugger to go away (as it were) without terminating the process. The question is: how? Clearly, polling for a debugger every X seconds is inefficient, and not allowing it to attach to a given process at all (sans an override like ptrace()) seems intensely private.

iTunes is calling ptrace(PT_DENY_ATTACH), which sets the P_LNOATTACH flag and stops debuggers (and other tracing processes, e.g. F-Script and DTrace) from attaching to the process.
See "Is it possible to conceal a OS X app from DTrace?" for more information.
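For reference, this is a single call made early in the process's lifetime. A minimal sketch (the placement inside iTunes is a guess, but the API itself is public, declared in <sys/ptrace.h>):

    #include <sys/ptrace.h>

    int main(int argc, char *argv[]) {
    #if !defined(DEBUG)
        // After this succeeds, any later attempt to trace the process
        // fails, and an already-attached tracer kills the process.
        ptrace(PT_DENY_ATTACH, 0, 0, 0);
    #endif
        // ... normal app startup ...
        return 0;
    }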
I wouldn't be surprised if iTunes is also actively using detection methods to identify debuggers. Apple has gone to great lengths to try to protect the DRM in iTunes.
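One widely documented detection technique (this is Apple's own recipe from Technical Q&A QA1361, not necessarily what iTunes does) is to ask the kernel whether the process is currently being traced:

    #include <stdbool.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    static bool AmIBeingDebugged(void) {
        struct kinfo_proc info;
        info.kp_proc.p_flag = 0;
        int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid() };
        size_t size = sizeof(info);
        sysctl(mib, 4, &info, &size, NULL, 0);
        // P_TRACED is set while a debugger is attached.
        return (info.kp_proc.p_flag & P_TRACED) != 0;
    }

Polling this is exactly the inefficiency the question complains about, which is why the one-shot, kernel-enforced PT_DENY_ATTACH is the heavier hammer.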
There are a number of books that cover methods of securing Cocoa applications, including detecting debuggers. Some potential titles that spring to mind (I haven't double-checked the contents of these, so don't assume they cover detection methods): "Mac Hacker's Handbook", "Hacking and Securing iOS Applications", "Professional Cocoa Application Security", and "Secure Programming Cookbook for C & C++".
"Mac OS X Internals" and "Mac OS X and iOS Internals" might have something on PT_DENY_ATTACH.

Related

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS and I'm hoping to add a new feature to the app - keeping a Mac awake while in closed-display (clamshell) mode while not having a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake while in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with two possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
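On the assertion question specifically: the public entry point IOPMAssertionCreateWithName() takes the assertion type as a plain CFString, so in principle the private types named in the question could be passed without linking a private framework at all. Whether they behave as hoped, or survive App Review, is untested here; a minimal sketch:

    #include <IOKit/pwr_mgt/IOPMLib.h>

    // "UserIsActive" is one of the undocumented assertion types observed
    // in the question; treat it as an assumption, not a supported API.
    IOPMAssertionID assertionID = kIOPMNullAssertionID;
    IOReturn ret = IOPMAssertionCreateWithName(
        CFSTR("UserIsActive"),
        kIOPMAssertionLevelOn,
        CFSTR("Amphetamine closed-display keep-awake"),
        &assertionID);
    // ... later, when done keeping the Mac awake:
    // IOPMAssertionRelease(assertionID);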
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
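To give a feel for user-space event synthesis without writing a driver, here is a sketch using the public Quartz Event Services call CGEventPost() instead of IOHIDPostEvent(); whether synthetic events like this satisfy the closed-display-mode keyboard/mouse requirement is an open question:

    #include <CoreGraphics/CoreGraphics.h>

    // Nudge the cursor to (100, 100) as if the user had moved the mouse.
    CGEventRef move = CGEventCreateMouseEvent(
        NULL, kCGEventMouseMoved,
        CGPointMake(100, 100), kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, move);
    CFRelease(move);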

Utilizing resources of separate apps on OS X

Is it possible to write software on the Mac that will launch other separate applications, tell them what to do, etc.? What language would best suit doing this? (Assuming it's Mac specific.)
I'm fairly new to Mac programming, though I have a strong background in iOS. I've seen multiple companies in the past write a script that will cross-compile source code: you run the app from the Terminal and it roams around your OS, grabbing what it needs to compile, and spits out Xcode-, Eclipse-, and Unity-ready versions of your source code. I'm familiar with iOS, and how it crashes the second you try to use another app's resources. That is what leads me to the original question:
Is it possible to write software on the Mac that will launch other separate applications and tell them what to do? Specifically, tasks like: launch Safari, take a screenshot, launch Disk Utility, launch Mail, email the screenshot. I know that OS X allows you to play around a bit more than iOS, but the question is how much.
It's still rather up to what each app will let you do, rather than just being able to do anything, but take a look at OS X's scripting/automation abilities. Primarily this is accessed through AppleScript, but there's now a JavaScript frontend as well (new to Yosemite).
If you're looking at building a native application that takes advantage of other applications, the same scripting abilities can also be reached via the Scripting Bridge from Cocoa (Objective-C/Swift).
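As a concrete sketch of the Scripting Bridge route (the iTunes header below is generated, not hand-written, and playpause comes from iTunes' scripting dictionary):

    #import <ScriptingBridge/ScriptingBridge.h>
    // Header generated beforehand with:
    //   sdef /Applications/iTunes.app | sdp -fh --basename iTunes
    #import "iTunes.h"

    iTunesApplication *iTunes = [SBApplication
        applicationWithBundleIdentifier:@"com.apple.iTunes"];
    [iTunes playpause];  // toggle playback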

NSBundle sandboxing in Objective-C

Odd title, I know. What I REALLY mean to ask is something like "can I create a TrustedBSD MAC framework or iOS-style sandbox for NSBundles?" I have a very dynamic application in which modules can be "plugged in" or "unplugged" from a "switchboard" at any time. The user operating it is on another computer, perhaps miles away, so there is no way to verify that the switchboard modules are compliant and won't wreck the system in the user's absence. I know iOS sandboxing and TrustedBSD (as well as Lion's Seatbelt.framework) can do this for processes, but how can I do this for bundle code?
Things I've thought of:
Static binary analysis of all POSIX calls and ObjC messaging (seems impossible, as binaries are... binary at this point, not code).
Tracing ObjC messages at runtime and logging them, locking the module out after a single stray call. (This wouldn't catch POSIX calls, or any C calls for that matter, nor assembly, and at minimum one stray call would already have been sent.)
Creating a sandboxed external process using XPC, loading modules there, and using PDO to route switchboard calls to that process (very doable, but I need Snow Leopard compatibility here).
Any ideas? I realize that a bundle, once loaded, ends up as PART of your application, which adds further problems for implementing any security.
The instant that you load a bundle into your process, you are giving it full control over your process. (This is because a bundle can contain code that runs at initialization time, before you even call any of its functions.)
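To make that concrete, a bundle can carry a module initializer that runs during loading itself; a minimal sketch of what a hostile bundle could contain:

    // Compiled into the bundle: this runs the moment the bundle is
    // loaded, before the host calls a single advertised function.
    __attribute__((constructor))
    static void moduleDidLoad(void) {
        // Arbitrary code, executing with the host process's privileges.
    }

(An Objective-C +load method on any class in the bundle fires at the same point.)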
Tracing messages at runtime is pointless, as a malicious bundle could simply disable the code responsible for message tracing before it fires. (It's running in the same memory space, after all, and it needn't use any ObjC messages to do this.)
In short: do not load bundles you do not trust. If you must make use of an untrusted bundle, use some form of sandboxing (whether that's XPC or something else).
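On macOS 10.8 and later, the XPC route looks roughly like this (NSXPCConnection postdates Snow Leopard, so it does not meet the compatibility constraint in the question; the protocol and service names below are hypothetical):

    // Host side: talk to a sandboxed XPC service that loads the bundles.
    @protocol SwitchboardModule  // hypothetical module interface
    - (void)handleCall:(NSString *)name reply:(void (^)(NSString *result))reply;
    @end

    NSXPCConnection *conn = [[NSXPCConnection alloc]
        initWithServiceName:@"com.example.ModuleHost"];  // hypothetical service
    conn.remoteObjectInterface =
        [NSXPCInterface interfaceWithProtocol:@protocol(SwitchboardModule)];
    [conn resume];

    id<SwitchboardModule> proxy = [conn remoteObjectProxy];
    [proxy handleCall:@"ping" reply:^(NSString *result) {
        NSLog(@"module replied: %@", result);  // a crash stays in the service
    }];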

How is it possible to access functions of app A from app B

I was wondering if, and in how many ways, an app can access specific functions of another app. For example:
open a URL in Safari/Firefox/Chrome
run JavaScript in the current browser tab
play/pause iTunes
rename selected files in the Finder
I am aware of the existence of AppleScript, but I was wondering whether that's the only way I have to interact with those apps and others.
Thanks.
There are three main ways an app exposes its functions to the outside world.
One is by supporting a URL protocol. To open a URL, just use NSWorkspace. There are many methods; if an app registers a specific protocol like x-my-app://some-work, you can just do
[[NSWorkspace sharedWorkspace] openURL:[NSURL URLWithString:@"x-my-app://some-work"]];
If you want to open a URL whose protocol (say http) is supported by many apps, and you want to specify which app to use, use openURLs:withAppBundleIdentifier:options:additionalEventParamDescriptor:launchIdentifiers:.
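For example (assuming Safari's bundle identifier, com.apple.Safari, as the target):

    [[NSWorkspace sharedWorkspace]
        openURLs:@[[NSURL URLWithString:@"http://example.com"]]
        withAppBundleIdentifier:@"com.apple.Safari"
        options:NSWorkspaceLaunchDefault
        additionalEventParamDescriptor:nil
        launchIdentifiers:NULL];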
Another is System Services. With this, an app can add entries to the Services menu and to the context menu of other apps; you can also invoke a service programmatically.
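Programmatic invocation goes through AppKit's NSPerformService(); a sketch (the "Make New Sticky Note" service name is taken from the stock Stickies app and may vary by system):

    #import <AppKit/AppKit.h>

    NSPasteboard *pboard = [NSPasteboard pasteboardWithUniqueName];
    [pboard declareTypes:@[NSPasteboardTypeString] owner:nil];
    [pboard setString:@"Hello from another app" forType:NSPasteboardTypeString];
    BOOL ok = NSPerformService(@"Make New Sticky Note", pboard);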
Otherwise, it's via Apple events. AppleScript is one way to deal with them, but not the only one; it's just a language for issuing Apple events. There are many ways to deal with Apple events from Cocoa; see this detailed document by Apple.
Basically, an app can export its internals in an object-oriented manner (which is not just its Objective-C hierarchy; you can control how much of its internal objects and methods you expose, etc.) via an sdef file. Then, another app can use this object-oriented system via Apple events.
To send and receive Apple events, you can of course construct them by hand, but you can also use higher-level wrappers like
AppleScript via NSAppleScript
Scripting Bridge
or AppScript.
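For instance, running a one-off AppleScript from Cocoa (the Safari script here is purely illustrative):

    NSAppleScript *script = [[NSAppleScript alloc] initWithSource:
        @"tell application \"Safari\" to open location \"http://example.com\""];
    NSDictionary *errorInfo = nil;
    [script executeAndReturnError:&errorInfo];  // errorInfo is populated on failure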
To learn which features an app exposes, just open the AppleScript Editor, choose the menu File → Open Dictionary, and pick an app.
Now, it's rather hard to use features of an app that the app does not expose via any of these methods. You still have a few workarounds.
UI Scripting. This is done by sending Apple events to a faceless app called System Events, which is one of the core programs of OS X. This way, you can programmatically emulate clicking a button, choosing a menu item, etc. in another app. So, almost anything you can do through another app's GUI can be done programmatically. To see the hierarchy of UI objects accessible to UI scripting, use a utility which comes with the Xcode tools, at
/Developer/Applications/Utilites/Accessibility Tools/Accessibility Inspector.app
This is very rudimentary but does the job; if you regularly use UI scripting, consider obtaining UI Browser, as Zygmunt suggests.
Finally, if you want to use a non-GUI, non-exposed feature of another app, you can inject code into that app.
Just expanding on Yuji's answer. If you were forced to go the UI scripting path, there's a nice application for analyzing the interface: http://pfiddlesoft.com/uibrowser/. However, the examples you mentioned should expose some APIs.
I might also recommend Sikuli (http://groups.csail.mit.edu/uid/sikuli/) as an IDE for scripting around user interfaces robustly.
For some applications, usually ones coming from GNU/Linux, there is D-Bus (http://en.wikipedia.org/wiki/D-Bus), although I haven't used it on a Mac myself yet. And let me also quote Wikipedia about Cocoa: "It is one of five major APIs available for Mac OS X; the others are Carbon, POSIX (for the BSD environment), X11 and Java." (http://en.wikipedia.org/wiki/Cocoa_%28API%29) That's just a loose tip for further exploration, as Yuji has already explained Apple events, which are key to your question.

Is this a reasonable "Application entry point"?

I have recently come across a situation where code is dynamically loading some libraries, wiring them up, then calling what is termed the "application entry point" (one of the libraries must implement IApplication.Run()).
Is this a valid "application entry point"?
I would always have considered the application entry point to be before the loading of the libraries and found the IApplication.Run() being called after a considerable amount of work slightly misleading.
The terms application and system are so widely and diversely used that you need to agree up front with your conversation partner on what they mean. E.g. sometimes an application is something with a UI, and a system is 'UI-less'. In general it's just a case of "you say potato, I say potahto".
As for the example you describe: that's just what a runtime (e.g. .NET or Java) does: loading a set of libraries and then calling the application entry point, i.e. the "main" method.
So in your case, the code loading the libraries is doing just the same, except it calls a method on an interface; you could then consider the loading code to be the runtime for that application. It's just a matter of perspective.
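To make the analogy concrete in the Cocoa terms used elsewhere in this document (the bundle path and the -run entry point are hypothetical), a host that loads a library and hands control to an agreed method is acting as the runtime:

    // Hypothetical plug-in host: load a bundle, then call the agreed
    // "application entry point" (-run on the bundle's principal class).
    NSBundle *bundle = [NSBundle bundleWithPath:@"/path/to/Module.bundle"];
    if ([bundle load]) {
        id module = [[[bundle principalClass] alloc] init];
        [module performSelector:@selector(run)];
    }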
The term "application" can mean whatever you want it to mean. "Application" merely means a collection of resources (libraries, code, images, etc) that work together to help you solve a problem.
So to answer your question, yes, it's a valid use of the term 'application'.
On its own, "application" actually means nothing. It is often used to refer to computer programs that provide some value to the user. A more precise term is application software, which has the following definition:
"Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software, which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation."
And since application really means application software, and software is any piece of code that performs any kind of task on a computer, I'd say a library can also be an application.
Most terms are of an artificial nature anyway. Is a plugin not an application? Is the Flash plugin in your browser not an application? People say no, it's just a plugin. Why? Because it can't run on its own; it needs to be loaded into a real process. But there is no definition saying only things that "can run on their own" are applications. The same holds true for a library. The core application could be just an empty container, and all logic and functionality, even the interaction with the user, could be performed by plugins or libraries, in which case those would be more of an application than the empty container that merely provides some context for them to run in. Compare this to Java. A Java application can't run on its own; it must run within a Java Virtual Machine (JVM). Does that mean the JVM is the application and the Java code is just... well, what? Isn't the Java code the real application, and the JVM just an empty runtime environment that provides nothing to the end user without the loaded Java code?
I think in this context "application entry point" means "the point at which the application (your code) enters the library".
I think probably what you're referring to is the main() function in C/C++ code, or WinMain in a Windows app. That is, it's the point where execution normally starts in an app. Your question is pretty broad and vague (for example, which OS are you running this on?), but this may be what you're looking for. This might also address the question.
Bear in mind when you're asking questions, details are your friend. People can give you a much better, more informed answer when you provide them with details.
EDIT:
In a broader context, consider what has to happen from the standpoint of the OS. When the user specifies that they want to run an app, the OS has to load the app from the hard drive, and once the app is loaded into memory, it has to pass control to some point in the memory block occupied by the newly loaded app to continue execution. That would be the "application entry point". When an app is built with dynamically linked code, the OS has to load all that dynamically linked code in order to get the correct app image into memory. Loading those shared bits of code does not change the fact that the OS must have a point to which to pass control once the app is loaded into memory.