Is it possible to design UI for iOS in HTML but run the logic in Objective-C?

I'm new to iPhone development. I come from web development/design, and what bothers me about iPhone development is the inability to do custom design... In other words, is there any way I could design my user interface (my view) with WebKit, so that all my UI elements would be written in HTML/CSS and the logic in Objective-C? I was thinking there might be a way to trigger some Objective-C code when an HTML button is pressed. Is there any way to do that (let's say via a "localhost request", or something like that)?
BUT not with PhoneGap or similar, because then you just start writing your logic in JavaScript, and I don't want that... I want my controller and model to be written in Objective-C and just the view module in HTML!

Look into PhoneGap. It has all of the abilities you specify, and it works on Android and others too (of course you'd need to write the native logic in Java and other languages for that to work).

The other responders are correct in that efficiency is a big concern, as all of your logic is gated on the web/native interface. The UIWebView performs a LOT of its duties on a single thread, and any of your interactions with it must as well.
If you really want to do this, however, there are solutions... Intercepting events from the UIWebView via delegate methods (such as shouldStartLoadWithRequest) that follow a protocol you define can work (a rough sketch follows), but I'd suggest either doing everything in JS (trust me, it will actually be faster) or just biting the bullet and learning native iOS development properly.
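For example, a minimal sketch of the interception approach, assuming a made-up app:// URL scheme; handleButtonPressWithQuery: is a hypothetical method of your own, not part of any framework:

// In your HTML view: <a href="app://buttonPressed?id=save">Save</a>

// In the view controller that is the UIWebView's delegate:
- (BOOL)webView:(UIWebView *)webView
    shouldStartLoadWithRequest:(NSURLRequest *)request
    navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *url = [request URL];
    if ([[url scheme] isEqualToString:@"app"]) {
        // The HTML button produced a fake "navigation" - handle it natively.
        if ([[url host] isEqualToString:@"buttonPressed"]) {
            [self handleButtonPressWithQuery:[url query]]; // your Objective-C logic
        }
        return NO; // swallow the request so the web view doesn't try to load it
    }
    return YES; // let ordinary page loads through
}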

Related

Appcelerator Hyperloop vs. Plain Titanium Modules

I've started playing around with Appcelerator Hyperloop. While it seems great to be able to access native APIs from JS from day zero, it does raise a few questions about the architecture of the platform and its performance.
Currently (AFAIK) a Titanium app has a main UI thread (that runs the native UI controllers) and a JS thread (that runs the JS logic). Each call from JS to native is passed through the "Bridge" (which is the expensive operation in an app).
Also, the Titanium API doesn't cover all of the native APIs and abstracts as much as it can. But if new APIs are introduced, it can take time for Appcelerator to implement them in the platform.
One of my favorite things about Titanium is the ability to extend it (using Objective-C for iOS and Java for Android) - allowing you to use native APIs that are not covered by Titanium, and also to develop truly native-performance controls in case we need to do anything that's too "heavy" for JS. And, as mentioned, it's developed 100% natively for each platform.
Now that Appcelerator has introduced Hyperloop, I've built a simple test app and saw that Hyperloop is not translated into native code but just into normal JS code:
var UILabel = require('hyperloop/uikit/uilabel');
var label = new UILabel();
label.text = "HELLO WORLD!";
$.index.add(label);
And another thing about it is that you have to run on the main thread.
So a few questions come to mind here as far as the Hyperloop architecture goes:
Do we still have a bridge? If Hyperloop is JS that calls a "special" Hyperloop require, then we still have a bridge - one that now not only acts as a bridge but also needs to do some sort of reflection (which is also an expensive operation)?
Until now JS ran in its own thread - so now, running on the single main thread seems to be a potential source of more UI-blocking operations.
The old-fashioned modules were truly native (not including the bridge call) - so how do Hyperloop-enabled apps compare with those?
There isn't much documentation or many articles about Hyperloop explaining the inner workings yet - so if anyone who has been trying it in an app has any answers, that would be very helpful.
Answering your questions directly:
There are no Kroll proxies involved anymore, since the actual classes are generated at runtime. This is done using the hyperloop-metabase, which performs reflection (as you already said) to build an AST that grabs the actual signatures, types, classes, methods, properties, etc.
We have not seen any performance issues with running on the main thread so far. If you do, please file a JIRA ticket so we can investigate the use case.
The old modules were "less native" than now, simply because they were all wrapped by the Kroll proxy (by extending every view from TiUIView and every proxy from TiProxy / TiViewProxy). Hyperloop does not work with those, making module development much faster and also allowing developers to test their changes live in their app without needing to package and reference the module manually. Hyperloop modules are nothing other than CommonJS modules, which are already used frequently across Alloy and other Ti components.
I hope that gives you a quick overview on how Hyperloop works. If you have further questions, let us know!
Hans
(As a detailed answer to the above comment)
So let's say you have a tableview in iOS. The native class is UITableView and the Titanium-API is Ti.UI.TableView / Ti.UI.ListView.
While the ListView already provides a huge performance boost compared to the TableView by abstracting the child-API usage into templates, those child APIs (Ti.UI.Label, Ti.UI.ImageView, ...) are still custom classes that are wrapped and provide custom logic (!), e.g. keeping track of their parent references, internal data structures and locks to jump between the threads.
If you now check the Hyperloop example of a native UITableView, you access the native APIs directly, so there is no proxy behind it that needs to manage sections, templates, items, etc. Of course we deliver that API through a Kroll proxy in order to display it in Titanium, but you don't "jump across the bridge" with every call you make from the SDK.
The easiest way to see that is to actually run some of the bigger examples, like the table view, collection view and view animation. If you do a fast scroll through these, you can already feel the performance boost compared to the "classic" Titanium APIs, simply because the only communication between your proxy and, say, the Ti.UI.Window you want to add it to is the .add() that receives the native API of type HyperloopClass.
Finally, of course it still makes sense to use Ti.UI.ListView, for example, because it comes with the built-in utilities that Titanium devs love (events, easy configuration and layout handling). But that's also where the benefit of Hyperloop comes in, by allowing developers to access those APIs themselves.
I hope that helps you understand it a bit better.

API for intercepting text input in another app?

I want to make an app similar to something like TextExpander, but I am not sure how you would intercept the text. As far as I can tell, I need to start with NSAccessibility. Could anyone share some snippets, or at least point me in the right direction?
First off, you should be aware that, because of the sandboxing requirement, this isn't possible at all if you want to sell your app in the App Store.
If you don't intend to sandbox your app, you can use the NSEvent class method addGlobalMonitorForEventsMatchingMask: to create a global key event handler that gets called when keys are pressed in other apps (but not your own app, use addLocalMonitor... for that).
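A minimal sketch of such a global monitor, with logging standing in for your own snippet-matching logic, might look like this:

#import <Cocoa/Cocoa.h>

// Somewhere in your (non-sandboxed) app's setup code:
id monitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSKeyDownMask
                                                    handler:^(NSEvent *event) {
    // Called for key presses that happen in *other* applications.
    NSLog(@"key down elsewhere: %@", [event charactersIgnoringModifiers]);
}];
// Keep a reference to `monitor` and tear it down later with:
// [NSEvent removeMonitor:monitor];
// Note: receiving global key events also requires the user to enable
// access for assistive devices (accessibility) for your app.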
To actually insert snippets, like TextExpander, there are several ways. You can use the accessibility APIs, but that requires that the app(s) you're targeting support accessibility, which isn't always the case.
Another option is to use the Quartz Event Services (CGEvent) APIs which provide (among other things) a low-level method to simulate key events.
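And a minimal sketch of simulating a keystroke with Quartz Event Services; virtual key code 0 is just 'a' on a US keyboard layout, used here purely as an example:

#include <ApplicationServices/ApplicationServices.h>

// Press and release the 'a' key (virtual key code 0 on US layouts).
CGEventRef keyDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, true);
CGEventRef keyUp   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, false);
CGEventPost(kCGHIDEventTap, keyDown);
CGEventPost(kCGHIDEventTap, keyUp);
CFRelease(keyDown);
CFRelease(keyUp);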
Edit: Nevermind. You're asking about Mac OS. I thought you were asking about iOS.
You should look at how TextExpander is used by other apps. The target app has to build in support for TE by making an object provided by TE a delegate of the text field. You can't run your code in someone else's app. They have to compile your code into their app. That's why there's a TextExpander SDK.
Once the TextExpander code is in the target app, the text field delegate gets the shared snippets by looking for snippets put into a shared pasteboard.

Phonegap iOS custom context menu

I'm developing a PhoneGap app for iOS, and would like to insert a new option in the context menu that shows up after selecting text. I had a look at several posts that are quite related, and almost all of them led to this link. I followed the instructions in that example, but couldn't make it work, due to my lack of experience with Objective-C. Even so, I'm not sure whether that is a valid solution when using Cordova/PhoneGap, at least without hacking the framework.
Has somebody accomplished this task?

Programming for iPhone - Do you really need to learn Objective-C?

Looking online, I saw that I can write most of the application in ANSI C code, or as a website and present it in a webView control.
Then besides some general knowledge about iOS and the API... Do I really need to learn Objective C?
You could use something like PhoneGap, which wraps an HTML-based application into a native launcher app. It may not be as powerful as what you can do with a pure native app, but on the other hand, your code will not only run on iOS.
PhoneGap does offer access to some of the phone's API (camera, notifications, accelerometer and so on) that you normally only get in native apps (it exposes them as JavaScript objects), so you can do more than you could in a regular HTML5 webapp, even without learning Objective-C.
Most people overlook the fact that the iPhone has an extremely capable web browser. You can create very powerful web apps and therefore avoid having to learn Objective-C.
Safari on the iPhone has a bunch of great HTML5 features, including local SQLite storage - so, for example, you could easily make a todo-list app which syncs with your server when there's a net connection.
You can even add home screen icons etc. Personally, I'm astonished people don't write iPhone web apps more often!
This is a super useful guide on how to do it:
http://building-iphone-apps.labs.oreilly.com/
You can use C# to write iPhone apps using MonoTouch, but it costs money. Then again, so does developing for the iPhone the normal way.
The other answers are correct in that you /can/ use other languages... you really don't want to. You are never going to create a pleasant to use, standard, and HIG-abiding application without learning Objective-C. Truly, though, there's no reason /not/ to learn something new. It's not particularly difficult (like, say, C++), and Cocoa is a well-designed API.
Somewhat related: I personally refuse to install PhoneGap-style apps from the App Store, as I find them of significantly lower inherent quality (especially compared to the rest of the apps on the device), and I suspect many non-developers would have similar issues with them, even if they couldn't pinpoint them as specifically.
Unless your app is all web, or uses a framework such as PhoneGap, you have to have some working knowledge of Objective-C. It's actually not that bad. It's C with Smalltalk bolted onto it.
It's generally much simpler than C++.
If you want a true native app that can take advantage of the latest features in the latest iOS release, Objective-C is the language you have to learn.
Objective-C is a very powerful language, and there are a ton of great frameworks - you are doing yourself a HUGE disservice by not learning the language, and your app quality will suffer as a result.
You can write an entire iPhone app in C++ using a framework like libnui.

How do I create Cocoa interfaces without Interface Builder?

I would prefer to create my interfaces programmatically. It seems as if all the docs on Apple Developer assume you're using Interface Builder. Is it possible to create these interfaces programmatically, and if so, where do I start learning how to do this?
I thought the relevant document, if one exists, would be in this section: http://developer.apple.com/referencelibrary/Cocoa/idxUserExperience-date.html
I like the question, and I'd also like to know of resources for going IB-less. Usefulness (the "why") is limited only by imagination. Off the top of my head, here are some possible reasons to program UIs explicitly:
Implementing a better Interface Builder.
Programming dynamic UIs, i.e., ones whose structure is not knowable statically (at compile/xcode time).
Implementing the Cocoa back-end of a cross-platform library or language for UIs.
There is a series of blog posts on working without a nib and a recent description by Michael Mucha on cocoa-dev.
I would prefer to create my interfaces programatically.
Why? Interface Builder is easier and faster. You can't write a typo by drag and drop, and you don't get those oh-so-handy Aqua guides when you're typing rectangles by hand.
Don't fight it. Interface Builder is your friend. Let it help you.
If you insist on wasting your own time and energy by writing your UI in code:
Not document-based (generally library-based, like Mail, iTunes, iPhoto): Create a subclass of NSObject, instantiate it, and make it the application's delegate; in the delegate's applicationDidFinishLaunching: method, create a window, populate it with views, and order it front (see the sketch after these two cases).
Document-based (like TextEdit, Preview, QuickTime Player): In the makeWindowControllers method of your NSDocument subclass, create your windows (and populate them with views) and create window controllers for them, making sure to send yourself addWindowController: for each window controller.
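A minimal sketch of the non-document-based case - the AppDelegate class name and the button are illustrative assumptions, not part of the original answer:

// AppDelegate.m
#import <Cocoa/Cocoa.h>

@interface AppDelegate : NSObject <NSApplicationDelegate>
@property (strong) NSWindow *window;
@end

@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    // Create the window in code instead of loading it from a nib.
    NSRect frame = NSMakeRect(200, 200, 480, 320);
    self.window = [[NSWindow alloc] initWithContentRect:frame
                                              styleMask:(NSTitledWindowMask |
                                                         NSClosableWindowMask |
                                                         NSResizableWindowMask)
                                                backing:NSBackingStoreBuffered
                                                  defer:NO];
    [self.window setTitle:@"Programmatic Window"];

    // Populate it with a view.
    NSButton *button = [[NSButton alloc] initWithFrame:NSMakeRect(20, 20, 120, 32)];
    [button setTitle:@"Click me"];
    [[self.window contentView] addSubview:button];

    // Order it front.
    [self.window makeKeyAndOrderFront:nil];
}
@end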
As a completely blind developer I can say that IB is not compatible with VoiceOver (the built-in screen-reader on OS X).
This means that without access to robust documentation on using Cocoa without IB I cannot develop apps for OS X / iPhone in Cocoa, which means I (ironically) cannot easily develop apps that are accessible to the blind (and all others) on OS X / iOS.
My current solution, which I would prefer not to use, is Java + SWT, of course this works for OS X, not so much for iOS.
In fact, IB becomes totally useless when you start to write your own UI classes. Let's say you create your own button that uses a skin system based on a plist, or a dynamic toolbar that loads and unloads items based on the user's selection (a rough sketch of the first case follows).
IB doesn't accept custom UI elements, so more complex UIs can't use it. And YES, you will want to do more complex things than what UIKit gives you.
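As a rough illustration of the kind of custom control meant here - the SkinnedButton name and the Skin.plist keys are made up for the example:

// SkinnedButton.m - a button that configures itself from a plist "skin"
#import <UIKit/UIKit.h>

@interface SkinnedButton : UIButton
@end

@implementation SkinnedButton
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Load skin values (colors, image names) from a bundled plist.
        NSString *path = [[NSBundle mainBundle] pathForResource:@"Skin" ofType:@"plist"];
        NSDictionary *skin = [NSDictionary dictionaryWithContentsOfFile:path];
        self.backgroundColor = [UIColor colorWithWhite:
            [[skin objectForKey:@"backgroundWhite"] floatValue] alpha:1.0];
        [self setImage:[UIImage imageNamed:[skin objectForKey:@"normalImage"]]
              forState:UIControlStateNormal];
    }
    return self;
}
@end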
Though this is quite a bit old...
I have tried many times to do everything programmatically only. It is hard, but possible.
Update:
I posted another question for this specific issue: View-based NSOutlineView without NIB?, and now
I believe everything can be done programmatically, but it's incredibly hard without consulting Apple engineers, due to the lack of information and examples.
The argument below might be off-topic, but I'd like to note why I strongly prefer the programmatic way.
I also prefer the programmatic way, because:
A static layout tool cannot handle anything dynamic.
Reproducing the same UI state across multiple NIBs is hard. Everything is implicit or hidden. You need to visit all the panels to find parameters. This kind of job is very error-prone.
Managing consistent state is hard, because reproducing the same look is hard.
Automation is impossible. You cannot make an auto-generated input form.
Parameter indirection - such as a variable element size chosen by the user - is not possible.
Aiming at a small point is a lot harder than hitting finger-sized keys at a fixed location - funny that this is a serious usability issue for developers!
IB sometimes screws up. That means the file still compiles and works, but when I open the source it looks broken and further editing becomes impossible. (You may not have experienced this yet, but if a XIB file gets complex, it will happen.)
It's image-based serialization. The concept is good, but the problem is that it's image-based only. IB doesn't keep source code that could cleanly rebuild the UI by replaying it. A clean boot is very important to guarantee a specific running state. Also, we cannot fix bugs in source code; bugs just pile up infinitely. This is the core reason why we cannot reproduce an equal (not merely similar-looking) UI state in IB.
Of course these issues can be solved by post-processing the NIB UI, but if we have to configure everything again anyway, there's no reason to use IB in the first place.
With text code, it's easy to reproduce the same state - just copy the code. It's also easy to inspect and fix the wrong part, because we have full control. But in IB, we have no control over the hard-core details.
IB can't be the ultimate solution. It's like Photoshop, but even Photoshop offers a text-based scripting facility. A GUI is a moving program, not a static image or graphic. The IB approach is completely wrong even for visual editing of GUIs. If you're one of the Apple folks reading this, I beg you to remove the whole dependency on IB, completely and ASAP.