Can we perform automated testing on VoiceOver?

I want to validate what VoiceOver says in my app.
Can I automate VoiceOver testing, or can it only be done manually?

1. Use unit tests to check every property of your accessible elements (label, hint, traits...); a sketch follows at the end of this answer.
2. Take a look at the Deque solution dedicated to iOS.
3. The GTXiLib open-source framework is also a good solution, even if it's written in Objective-C. :o)
However, UI testing (Xcode 10, iOS 12) is off the table because, for instance, custom actions are unreachable.
Take a look at this answer given by an Apple engineer during WWDC 2019.
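For point 1, here is a minimal sketch of such a unit test. MyApp, MyViewController, greetingLabel, and the expected strings are all hypothetical placeholders; substitute whatever your screen should announce:

import XCTest
import UIKit
@testable import MyApp // hypothetical module name

final class AccessibilityTests: XCTestCase {
    func testGreetingLabelIsAccessible() {
        // MyViewController and greetingLabel stand in for the screen you want to audit.
        let vc = MyViewController()
        vc.loadViewIfNeeded()

        let label = vc.greetingLabel
        // These are the properties VoiceOver reads aloud.
        XCTAssertTrue(label.isAccessibilityElement)
        XCTAssertEqual(label.accessibilityLabel, "Welcome")
        XCTAssertEqual(label.accessibilityHint, "Shown after a successful sign-in")
        XCTAssertTrue(label.accessibilityTraits.contains(.staticText))
    }
}

This doesn't prove what the speech synthesizer actually says, but it does pin down the strings VoiceOver will be given.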

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS, and I'm hoping to add a new feature to the app: keeping a Mac awake in closed-display (clamshell) mode without a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake in closed-display mode while not meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with two possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
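To make the first theory concrete: assertion types are plain CFString keys, and the public IOPMAssertionCreateWithName() accepts an arbitrary type string, so presumably one could experiment without linking a private framework at all. A minimal sketch of what I have in mind (untested with these private types; whether the power management daemon honors them from a third-party process is exactly what I'm unsure about):

import IOKit.pwr_mgt

// "UserIsActive" is one of the non-public assertion types listed above; it is
// declared in IOPMLibPrivate.h, not the public headers, so this is unsupported
// and could change in any macOS release.
var assertionID = IOPMAssertionID(0)
let result = IOPMAssertionCreateWithName(
    "UserIsActive" as CFString,
    IOPMAssertionLevel(kIOPMAssertionLevelOn),
    "Keep awake in clamshell mode" as CFString,
    &assertionID)

if result == kIOReturnSuccess {
    // ... keep the Mac awake while the assertion is held, then release it:
    IOPMAssertionRelease(assertionID)
}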
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation to scan for the inclusion of known malware.
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll require rewriting the kext as a DriverKit driver rather than doing a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
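For illustration, here's a minimal sketch of synthesizing input from user space using CGEvent (Quartz Event Services), a public relative of IOHIDPostEvent(). This is not necessarily what AntiSleep does, and whether a synthesized event satisfies the closed-display-mode input requirement is something you'd have to test:

import CoreGraphics

// Synthesize a key press from user space. Virtual key 0x31 is the space bar
// on ANSI keyboards; any harmless key would do for a "user is present" signal.
// Note: posting events typically requires the Accessibility permission.
if let keyDown = CGEvent(keyboardEventSource: nil, virtualKey: 0x31, keyDown: true),
   let keyUp = CGEvent(keyboardEventSource: nil, virtualKey: 0x31, keyDown: false) {
    keyDown.post(tap: .cghidEventTap)
    keyUp.post(tap: .cghidEventTap)
}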

Appcelerator Hyperloop vs. Plain Titanium Modules

I've started playing around with Appcelerator Hyperloop. While it seems great to be able to access native APIs from JS from day zero, it does raise a few questions about the platform's architecture and performance.
Currently (AFAIK) a Titanium app has a main UI thread (that runs the native UI controllers) and a JS thread (that runs the JS logic). Each call from JS to native is passed through the "bridge" (which is the expensive operation in an app).
Also, the Titanium API doesn't cover all of the native APIs and abstracts as much as it can. But if new APIs are introduced, it could take time for Appcelerator to implement them in the platform.
One of my favorite things about Titanium is the ability to extend it (using Objective-C for iOS and Java for Android), allowing you to use native APIs that are not covered by Titanium, and also to develop truly native, performant controls in case we need to do anything that's too "heavy" for JS. And, as mentioned, it's developed 100% natively for each platform.
Now that Appcelerator has introduced Hyperloop, I've made a simple test app and saw that Hyperloop is not translated into native code but just to normal JS code:
var UILabel = require('hyperloop/uikit/uilabel'); // exposes the native UIKit class to JS
var label = new UILabel();
label.text = "HELLO WORLD!";
$.index.add(label); // adds the native view to the Alloy view hierarchy
Another thing is that Hyperloop code has to run on the main thread.
So we basically have a few things come to mind here as far as Hyperloop architecture goes:
Do we still have a bridge? If Hyperloop is JS that calls a "special" Hyperloop require, then we still have a bridge that now not only acts as a bridge but also needs to do some sort of reflection (which is also an expensive operation)?
Until now, JS ran in its own thread; running everything on the single main thread now seems like a potential source of more UI-blocking operations.
The old-fashioned modules were truly native (not counting the bridge call), so how do Hyperloop-enabled apps compare with those?
There isn't much documentation or many articles about Hyperloop that explain its inner workings yet, so answers from anyone who has been building apps with it would be very helpful.
Answering your questions directly:
There are no Kroll proxies involved anymore, since the actual classes are generated at runtime. This is done by using the hyperloop-metabase, which does reflection (as you already said) to build an AST that grabs the actual signatures, types, classes, methods, properties, etc.
We have not seen any performance issues with running on the main thread so far. If you do, please file a JIRA ticket so we can investigate the use case.
The old modules were "less native" than now, simply because they were all wrapped by the Kroll proxy (by extending every view from TiUIView and every proxy from TiProxy / TiViewProxy). Hyperloop does not work with those, making module development much faster by also allowing developers to test their changes live in the app without needing to package and reference the module manually. Hyperloop modules are nothing other than CommonJS modules, which are already used frequently across Alloy and other Ti components.
I hope that gives you a quick overview of how Hyperloop works. If you have further questions, let us know!
Hans
(As a detailed answer to the above comment)
So let's say you have a table view in iOS. The native class is UITableView and the Titanium API is Ti.UI.TableView / Ti.UI.ListView.
While the ListView already provides a huge performance boost compared to the TableView by abstracting the child-API usage into templates, those child APIs (Ti.UI.Label, Ti.UI.ImageView, ...) are still custom classes that are wrapped and provide custom logic (!), e.g. keeping track of their parent references, internal data structures, and locks to jump between the threads.
If you now check the Hyperloop example of a native UITableView, you access the native APIs directly, so no proxy behind it needs to manage sections, templates, items, etc. Of course we deliver that API through a Kroll proxy in order to display it in Titanium, but you don't "jump across the bridge" with every call you make from the SDK.
The easiest way to see that is to actually run some of the bigger examples, like the table view, collection view, and view animation. If you do a fast scroll through these, you can already feel the performance boost compared to the "classic" Titanium APIs, simply because the only communication between your proxy and the Titanium view it lives in (like a Ti.UI.Window you want to add it to) is the .add() call to receive the native API of the type HyperloopClass.
Finally, of course it still makes sense to use Ti.UI.ListView, for example, because it comes with the built-in utilities that Titanium devs love (events, easy configuration, and layout handling). But that's also where the benefit of Hyperloop comes in, by allowing developers to access those APIs themselves.
I hope that helps a bit more to understand it.

GWT & MVP in order to deliver BOTH Native (Android+ObjC) & HTML5 Mobile Apps?

So GWT best practices encourage one to use some flavour of MVP, which should in theory allow one to write different native views while sharing the presenter business logic.
This seems to be at the heart of the GWT spin-off Google project http://code.google.com/p/j2objc/ which converts the non-UI part of your code to Objective-C, allowing you to write the rest natively in Objective-C.
So my question is: if this really hard part of the puzzle is being solved, how hard would it be to include an HTML5 mobile library (like MGWT or Touch4j [Sencha]) in this MVP pipeline to have the best of all worlds?
Having dabbled with http://code.google.com/p/playn/ , this clearly seems to be the blueprint for a cross-platform build system (native Android & HTML5 & Java & ...), but that project is geared towards single-screen drawing and an event loop for game dynamics, and doesn't allow for keyboard input and other typical mobile goodies.
It seems a shame that if so much of the problem has been solved, it's not possible to go the extra mile. The answer to this question would be the best plan for actioning a solution, including such nigglies as which MVP structure to choose to ease accommodation of the various widget libraries (GWTP vs MVP 2.1), and whether the best approach is to start with the PlayN code base and start hacking on it. What are the gotchas? Or if another path is chosen, why that one, and why would it be the best?
Thanks a lot. :-)
It is not clear whether your question is about evaluating options for multi-platform app development or about MVP.
You can evaluate additional technologies which are used with Sencha and GWT:
1) mgwt
2) Titanium
3) PhoneGap
You can also reference - Creating a mobile app using Google App Engine and GWT?
Note: PlayN, as you mention, is more of a gaming platform and not suitable for business apps.
MVP is definitely doable... and at times you may feel like it's a lot of work, but it pays off in the end. Check out the Touch4j Kitchen Sink, which is written using MVP. You can take that down to the device with Cordova if you wish. The code is on GitHub:
https://github.com/emitrom/touch4jks
The repo is actively being worked on (we are updating to Touch4j 4.0), so it won't run out of the gate, but at least you can see and follow the model :-)
Titanium4j is to Appcelerator's Titanium as Touch4j is to Sencha Touch. You may want to check that out as well. Titanium4j and Touch4j rely on GWT.
Cheers.

Objective-C playground?

Is there any sort of Mac app, Web app, or others like JSFiddle for Objective-C/Cocoa purposes?
It's not entirely the same, but look into F-Script: http://www.fscript.org/
It lets you rapid-prototype and experiment. You can also hook it into existing apps very easily. It has been invaluable for me for certain types of UI debugging.
I've also found CodeRunner to be quite handy for boilerplate app generation and one-click console running to try language snippets out. Available on the App Store at a price.
I created playgrounds for Objective-C on top of code injection, so you can experiment in the normal iOS Simulator. It's open source on GitHub.
Video showing them in action

Is it possible to record screen with Titanium / Appcelerator?

We're in the process of developing a desktop application which needs to record the user's screen once they click a button. I read a tutorial about Adobe AIR, which says it is easy to do with AIR: http://www.adobe.com/devnet/air/flex/articles/air_screenrecording.html
But our preference is Titanium, as we've explored it a little bit. So I want to know: is that even possible? If yes, how can we get started?
There's also an interesting solution which uses a Java applet for recording, as demonstrated here: http://www.screencast-o-matic.com/create?step=info&sid=default&itype=choose
But again, we're not sure about Java and would like to know how it can be done, or if it's even possible to run a Java applet in Titanium?
When you say "record screen", I'm assuming you mean video. Correct?
The only way to do this in Titanium Desktop right now is to take a bunch of screenshots and string them together (encoding would probably need to be done server-side).
Depending on how long your videos need to be, this probably won't work for you. I'm also not confident in how quickly you could capture screenshots, and if it would have a high enough frame rate to be usable.
Past that, a module could be developed for Desktop to support some native APIs to record video. That's not something I see on the horizon, though.
I hope this helps, albeit a rather dismal answer. -Dawson