iOS Push Notification/Framework issue - objective-c

I've been searching around the net for information or source code to achieve this. The application I'm developing uses a custom framework that we are also developing, and our customer wants to add push notification support to the app.
But he wants the framework we created to handle the push notifications (rather than the app). The following diagram explains the situation a bit better:
The problem is that the AppDelegate seems to be the only thing that handles the notifications from the OS, so the only solution I can think of is having the app forward the notifications that come from the OS to the framework. Any ideas or thoughts? Am I missing something?
Thanks in advance.

You will have to forward the calls. It should only require a few lines of code added to the AppDelegate to interface with your framework, though.
You could also take a hackier and more advanced approach where you swap the IMPs of the AppDelegate methods and forward them through your framework. Just make sure you call the original IMP once your framework has done what it needs to. I wouldn't recommend this approach, though, as it may not remain stable across future iOS versions.
Forwarding the calls is the way to go in my opinion.
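For illustration, a minimal sketch of the forwarding approach; MyFramework, its sharedInstance, and the handler method names are hypothetical stand-ins for whatever your framework actually exposes:

// AppDelegate.m — forward the OS push notification callbacks into the framework.
#import "MyFramework.h" // hypothetical framework header

- (void)application:(UIApplication *)application
    didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken {
    // Let the framework register the token with its push backend.
    [[MyFramework sharedInstance] handleDeviceToken:deviceToken];
}

- (void)application:(UIApplication *)application
    didFailToRegisterForRemoteNotificationsWithError:(NSError *)error {
    [[MyFramework sharedInstance] handleRegistrationError:error];
}

- (void)application:(UIApplication *)application
    didReceiveRemoteNotification:(NSDictionary *)userInfo {
    // Hand the payload straight to the framework.
    [[MyFramework sharedInstance] handleRemoteNotification:userInfo];
}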

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS, and I'm hoping to add a new feature to the app: keeping a Mac awake in closed-display (clamshell) mode without a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake while in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with two possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
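As a data point on the private assertion types: IOPMAssertionCreateWithName() is a public IOKit function that takes the assertion type as a plain CFStringRef, so you may not need to link a private framework at all. A minimal sketch, assuming the private type strings from IOPMLibPrivate.h (e.g. "DenySystemSleep") are accepted at runtime:

#import <IOKit/pwr_mgt/IOPMLib.h>

// Hedged sketch: "DenySystemSleep" is a private, undocumented assertion type
// copied from IOPMLibPrivate.h. It may change or break in any macOS release
// and would likely be rejected in App Store review.
IOPMAssertionID assertionID = kIOPMNullAssertionID;
IOReturn result = IOPMAssertionCreateWithName(
    CFSTR("DenySystemSleep"),            // private assertion type (assumption)
    kIOPMAssertionLevelOn,
    CFSTR("Amphetamine closed-display"), // reason shown in `pmset -g assertions`
    &assertionID);
if (result == kIOReturnSuccess) {
    // ...keep the Mac awake; release when done:
    IOPMAssertionRelease(assertionID);
}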
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll mean rewriting the kext as a DriverKit driver rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
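To illustrate the user-space route, here is a hedged sketch of posting a synthetic "null" HID event via IOHIDPostEvent(), which the system counts as user activity; note that NXOpenEventStatus() has been deprecated for some time, so treat this as an assumption to verify on current macOS:

#import <IOKit/hidsystem/IOHIDLib.h>
#import <IOKit/hidsystem/IOLLEvent.h>
#import <IOKit/hidsystem/event_status_driver.h>

// Open a connection to the HID system and post a NULL event, which
// registers as user activity without moving the cursor or typing.
NXEventHandle handle = NXOpenEventStatus();
if (handle != MACH_PORT_NULL) {
    NXEventData eventData;
    bzero(&eventData, sizeof(eventData));
    IOGPoint location = {0, 0};
    IOHIDPostEvent(handle, NX_NULLEVENT, location, &eventData,
                   kNXEventDataVersion, 0, 0);
    NXCloseEventStatus(handle);
}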

Appcelerator Hyperloop vs. Plain Titanium Modules

I've started playing around with Appcelerator Hyperloop. While it seems great to be able to access native APIs from JS from day zero, it does raise a few questions about the platform's architecture and performance.
Currently (AFAIK) a Titanium app has a main UI thread (that runs the native UI controllers) and a JS thread (that runs the JS logic). Each call from JS to native is passed through the "Bridge" (which is the expensive operation in an app).
Also, the Titanium API doesn't cover all of the native APIs and abstracts as much as it can. But if new APIs are introduced, it can take time for Appcelerator to implement them in the platform.
One of my favorite things about Titanium is the ability to extend it (using Objective-C for iOS and Java for Android), allowing us to use native APIs that are not covered by Titanium, and also to develop truly native, performant controls in case we need to do anything that's too "heavy" for JS. And, as mentioned, such modules are developed 100% natively for each platform.
Now that Appcelerator has introduced Hyperloop, I've built a simple test app and saw that Hyperloop is not translated into native code but remains normal JS code:
var UILabel = require('hyperloop/uikit/uilabel');
var label = new UILabel();
label.text = "HELLO WORLD!";
$.index.add(label);
Another thing is that Hyperloop code has to run on the main thread.
So a few things come to mind here as far as Hyperloop's architecture goes:
Do we still have a bridge? If Hyperloop is JS that calls a "special" Hyperloop require, then we still have a bridge, one that now not only acts as a bridge but also needs to do some sort of reflection (which is also an expensive operation).
Until now JS ran in its own thread, so running everything on the single main thread now seems like a potential source of more UI-blocking operations.
The old-fashioned modules were truly native (no bridge call involved), so how do Hyperloop-enabled apps compare with those?
There isn't much documentation or many articles about Hyperloop that explain its inner workings yet, so if anyone has answers or has been trying apps with it, that could be very helpful.
Answering your questions directly:
There are no Kroll proxies involved anymore, since the actual classes are generated at runtime. This is done using the hyperloop-metabase, which does reflection (as you already said) to build an AST that grabs the actual signatures, types, classes, methods, properties, etc.
We have not seen any performance issues with running on the main thread so far. If you do, please file a JIRA ticket so we can investigate the use case.
The old modules were "less native" than now, simply because they were all wrapped by the Kroll proxy (by extending every view from TiUIView and every proxy from TiProxy / TiViewProxy). Hyperloop does not work with those, which makes module development much faster and also lets developers test their changes live in the app without packaging and referencing a module manually. Hyperloop modules are nothing other than CommonJS modules, which are already used frequently across Alloy and other Ti components.
I hope that gives you a quick overview on how Hyperloop works. If you have further questions, let us know!
Hans
(As a detailed answer to the above comment)
So let's say you have a table view in iOS. The native class is UITableView and the Titanium API is Ti.UI.TableView / Ti.UI.ListView.
While the ListView already provides a huge performance boost compared to the TableView by abstracting the child-API usage into templates, those child APIs (Ti.UI.Label, Ti.UI.ImageView, ...) are still custom classes that are wrapped and provide custom logic (!), e.g. keeping track of their parent references, internal data structures, and locks for jumping between threads.
If you now check the Hyperloop example of a native UITableView, you access the native APIs directly, so there is no proxy behind it that needs to manage sections, templates, items, etc. Of course we deliver that API through a Kroll proxy in order to display it in Titanium, but you don't "jump across the bridge" with every call you make from the SDK.
The easiest way to see that is to run a bigger example like the tableview, collectionview, and view-animation samples. If you do a fast scroll through these, you can already feel the performance boost compared to the "classic" Titanium APIs, simply because the only communication between your proxy and the Titanium view you attach it to (e.g. a Ti.UI.Window) is the .add() call that receives the native API as a HyperloopClass.
Finally, it of course still makes sense to use Ti.UI.ListView, for example, because it comes with the built-in utilities that Titanium devs love (events, easy configuration, and layout handling). But that's also where the benefit of Hyperloop comes in, by allowing developers to access those APIs themselves.
I hope that helps a bit more to understand it.

sending data between tweaks

I have a tweak with hooks into an app (tweak1).
The tweak is supposed to use a framework to execute some code.
Unfortunately, on iOS 7 I'm unable to do that.
However, when the same code is executed in a separate tweak (tweak2) with hooks into SpringBoard, it runs just fine.
My question: is it possible for me to send a dictionary from the first tweak (tweak1) to tweak2 so the code gets executed there?
I think I need to use CPDistributedNotificationCenter, but I'm not sure.
If that's the case, a helpful suggestion or example would be greatly appreciated.
Many thanks.
CPDistributedNotificationCenter should work, or you could just use NSDistributedNotificationCenter. It inherits from NSNotificationCenter, which we all know how to use.
Another solution I can suggest is CFMessagePort, which I'm using in my apps. I need to support iOS 4, which doesn't support NSDistributedNotificationCenter, so I ended up using CFMessagePort. It differs from the notification model in that you can't broadcast messages to everyone; you can only send messages between two known ports. But in your case that probably doesn't matter.
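A minimal sketch of the CFMessagePort approach, serializing the dictionary as a property list; the port name and payload keys are made up, and you should verify that your iOS version lets the sandboxed app look up the SpringBoard-side port:

// tweak2 (SpringBoard side): create the local port and receive dictionaries.
static CFDataRef MessageCallback(CFMessagePortRef port, SInt32 msgid,
                                 CFDataRef data, void *info) {
    NSDictionary *dict = [NSPropertyListSerialization
        propertyListWithData:(__bridge NSData *)data
                     options:NSPropertyListImmutable
                      format:NULL
                       error:NULL];
    NSLog(@"tweak2 received: %@", dict);
    return NULL; // no reply needed
}

// ...in tweak2's initialization:
CFMessagePortRef local = CFMessagePortCreateLocal(NULL,
    CFSTR("com.example.tweak2.port"), MessageCallback, NULL, NULL);
CFRunLoopSourceRef source = CFMessagePortCreateRunLoopSource(NULL, local, 0);
CFRunLoopAddSource(CFRunLoopGetMain(), source, kCFRunLoopCommonModes);

// tweak1 (app side): connect to the named port and send a dictionary.
NSDictionary *payload = @{ @"action": @"runMyCode" };
NSData *data = [NSPropertyListSerialization dataWithPropertyList:payload
    format:NSPropertyListBinaryFormat_v1_0 options:0 error:NULL];
CFMessagePortRef remote = CFMessagePortCreateRemote(NULL,
    CFSTR("com.example.tweak2.port"));
if (remote) {
    CFMessagePortSendRequest(remote, 0, (__bridge CFDataRef)data,
                             1.0, 1.0, NULL, NULL);
    CFRelease(remote);
}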
There is also the XPC API but I've never used it and can't say much about it. It's an IPC API so it should work. Many iOS components use it.

API for intercepting text input in another app?

I want to make an app similar to something like TextExpander, but I am not sure how you would intercept the text. As far as I can tell, I need to start with NSAccessibility. Could anyone share some snippets, or at least point me in the right direction?
First off, you should be aware that, because of the sandboxing requirement, this isn't possible at all if you want to sell your app in the App Store.
If you don't intend to sandbox your app, you can use the NSEvent class method addGlobalMonitorForEventsMatchingMask:handler: to create a global key event handler that gets called when keys are pressed in other apps (but not your own app; use addLocalMonitor... for that).
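A minimal sketch (NSEventMaskKeyDown is the modern name of the older NSKeyDownMask constant, and global monitoring requires the user to grant your app Accessibility access):

#import <Cocoa/Cocoa.h>

// Observe key-down events in other applications. This is read-only:
// a global monitor cannot modify or swallow the events it sees.
id monitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSEventMaskKeyDown
                                                    handler:^(NSEvent *event) {
    NSLog(@"Key pressed in another app: %@", event.charactersIgnoringModifiers);
}];
// Later, when you are done observing:
// [NSEvent removeMonitor:monitor];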
To actually insert snippets, like TextExpander, there are several ways. You can use the accessibility APIs, but that requires that the app(s) you're targeting support accessibility, which isn't always the case.
Another option is to use the Quartz Event Services (CGEvent) APIs which provide (among other things) a low-level method to simulate key events.
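For example, a minimal sketch that types a lowercase "a" (virtual keycode 0 on ANSI keyboards) into whatever app has focus:

#import <ApplicationServices/ApplicationServices.h>

// Create matching key-down/key-up events and post them at the HID level.
CGEventRef keyDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, true);
CGEventRef keyUp   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, false);
CGEventPost(kCGHIDEventTap, keyDown);
CGEventPost(kCGHIDEventTap, keyUp);
CFRelease(keyDown);
CFRelease(keyUp);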
Edit: Never mind. You're asking about Mac OS. I thought you were asking about iOS.
You should look at how TextExpander is used by other apps. The target app has to build in support for TE by making an object provided by TE a delegate of the text field. You can't run your code in someone else's app. They have to compile your code into their app. That's why there's a TextExpander SDK.
Once the TextExpander code is in the target app, the text field delegate gets the shared snippets by looking for snippets put into a shared pasteboard.

is it possible to design UI for iOS in html but running logic in objective-c

I'm new to iPhone development. I come from web development/design, and what bothers me in iPhone development is the inability to do custom design... In other words, is there any way I could design my user interface (my view) with WebKit, so that all my UI elements would be written in HTML/CSS and the logic in Objective-C? I was thinking I could trigger some Objective-C code when an HTML button is pressed. Is there any way to do that (let's say via a "localhost request" or something I don't know of)?
BUT not with PhoneGap or similar, because then you just start writing your logic in JavaScript and I don't want that... I want my controller and model to be written in Objective-C and just the view module in HTML!
Look into PhoneGap. It has all of the abilities you specify, and it works on Android and others too (of course you'd need to write the native logic in Java and other languages for that to work).
The other responders are correct in that efficiency is a big concern, as all of your logic is gated on the web/native interface. The UIWebView performs a LOT of its duties on a single thread, and any of your interactions with it must happen there as well.
If you really want to do this, however, there are solutions. Intercepting requests from the UIWebView via delegate methods (such as shouldStartLoadWithRequest:), using a URL scheme that you define, can work, but I'd suggest either doing everything in JS (trust me, it will actually be faster) or just biting the bullet and learning more native iOS development.
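For what it's worth, here is a minimal sketch of that interception pattern, using a made-up myapp:// URL scheme and a hypothetical handleNativeAction: method:

// In the HTML view: <a href="myapp://buttonTapped">Press me</a>
// In the UIWebView's delegate:
- (BOOL)webView:(UIWebView *)webView
    shouldStartLoadWithRequest:(NSURLRequest *)request
                navigationType:(UIWebViewNavigationType)navigationType {
    if ([request.URL.scheme isEqualToString:@"myapp"]) {
        // Route to native Objective-C logic instead of loading the URL.
        [self handleNativeAction:request.URL.host]; // hypothetical handler
        return NO;
    }
    return YES; // allow normal page loads
}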