iOS augmented reality library with simple "events" system - objective-c

I'm looking for an iOS augmented reality library that will enable me to track a marker and either:
Show the user a video (local or YouTube, full screen or mapped to the marker), or
Load a regular old-fashioned view controller with my own code in it (like a UITableViewController).
I've been looking around, and all the augmented reality libraries I've seen seem overly complex (for what I want to do).
Do you know of any lightweight library that will allow me to do this? Paid libraries are not a problem.
This is what I've looked at:
Vuforia
String
Popcode
Metaio
3DAR
Mixare
Thanks!
P.S.: I don't know how to program with Unity, and a few of what look like the most promising libraries use it, so those are not an option for me. I'd prefer straight-up Objective-C inside Xcode.

Vuforia now has a video solution included with their Unity SDK if you are able to use Unity. http://u3d.as/content/qualcomm-inc-/vuforia-video-playback/36v

Related

Is React Native suitable for building an OpenGL-accelerated 2D-game?

Say I wanted to build something like a 2D side-scroller game. Would React Native be suitable performance-wise? E.g., can I use OpenGL acceleration for it? Or would it probably be slower than just using WebGL and HTML5?
Researched some more and came up with this information:
Apparently there is a GLView which holds a WebGL context:
https://docs.expo.io/versions/latest/sdk/gl-view.html
On that page it says this:
Any WebGL-supporting library that expects a WebGLRenderingContext could be used. Sometimes such libraries assume a web JavaScript context (such as assuming document). Usually this is for resource loading or event handling, with the main rendering logic still only using pure WebGL. So these libraries can usually still be used with a couple of workarounds. The Expo-specific integrations above include workarounds for some popular libraries.
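To make that concrete, here is a minimal sketch of the GLView API described above (assuming the current expo-gl import path and component name — the docs page linked above may show an older form):

import React from 'react';
import { GLView } from 'expo-gl';

export default function GameSurface() {
  return (
    <GLView
      style={{ flex: 1 }}
      onContextCreate={(gl) => {
        // gl behaves like a WebGLRenderingContext backed by native OpenGL ES.
        gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
        gl.clearColor(0, 0, 1, 1);
        gl.clear(gl.COLOR_BUFFER_BIT);
        gl.endFrameEXP(); // Expo-specific call that presents the frame on screen.
      }}
    />
  );
}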
Also a Twitter comment from Expo which mentions 'games' specifically:
Expo Graphics gives you the power of GL combined with Expo + React Native. It is the foundation for image filters, games, and special effects.
And there should be a demo here:
https://github.com/gre/gl-react
There are not many projects listed there that use React Native to build a game. Still, the existence of a WebGL context interface to native OpenGL acceleration gives rise to hope.
I've used react-native-webgl to build a minesweeper game. This library provided the performance gain I needed to render a 16x30 grid of cells with quick transitions from one state to another. In some circumstances the game needs to re-render dozens or even hundreds of cells at once. The default React Native renderer is not fast enough to do that without the user noticing the delay.
Note that while react-native-webgl solves the performance problem, it requires you to write low-level code such as creating shaders, managing vertices, etc., roughly as sketched below. And I haven't found any libraries built on top of react-native-webgl that would work for my task.
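For a sense of what that low-level code looks like, here is a minimal sketch based on the library's README (the "RN" extension object and its endFrame call are react-native-webgl specifics; the rest is plain WebGL):

import React from 'react';
import { WebGLView } from 'react-native-webgl';

export default function Board() {
  return (
    <WebGLView
      style={{ width: 300, height: 300 }}
      onContextCreate={(gl) => {
        const rngl = gl.getExtension('RN'); // library-specific extension object

        // Even a single triangle needs hand-written shaders and vertex buffers.
        const vs = gl.createShader(gl.VERTEX_SHADER);
        gl.shaderSource(vs, 'attribute vec2 p; void main() { gl_Position = vec4(p, 0.0, 1.0); }');
        gl.compileShader(vs);
        const fs = gl.createShader(gl.FRAGMENT_SHADER);
        gl.shaderSource(fs, 'precision mediump float; void main() { gl_FragColor = vec4(1.0); }');
        gl.compileShader(fs);
        const prog = gl.createProgram();
        gl.attachShader(prog, vs);
        gl.attachShader(prog, fs);
        gl.linkProgram(prog);
        gl.useProgram(prog);

        const buf = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, buf);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 1, -1, -1, 1, -1]), gl.STATIC_DRAW);
        const loc = gl.getAttribLocation(prog, 'p');
        gl.enableVertexAttribArray(loc);
        gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

        gl.clearColor(0, 0, 0, 1);
        gl.clear(gl.COLOR_BUFFER_BIT);
        gl.drawArrays(gl.TRIANGLES, 0, 3);
        rngl.endFrame(); // flush the frame to the native surface
      }}
    />
  );
}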
So if you really need or want to use React Native for your game, use react-native-webgl or GLView for Expo. Otherwise use a different technology like Unity.
You can find the source code of my game here.

Appcelerator Hyperloop vs. Plain Titanium Modules

I've started playing around with Appcelerator Hyperloop. While it seems great to access native APIs from JS from day zero, it does raise a few questions about the architecture and performance of the platform.
Currently (AFAIK) a Titanium app has a main UI thread (that runs the native UI controllers) and a JS thread (that runs the JS logic). Each call from JS to native is passed through the "Bridge" (which is the expensive operation in an app).
Also, the Titanium API doesn't cover all of the native APIs and abstracts as much as it can. But if new APIs are introduced, it can take time for Appcelerator to implement them into the platform.
One of my favorite things about Titanium is the ability to extend it (using Objective-C for iOS and Java for Android), allowing the use of native APIs that are not covered by Titanium, and also developing truly native, performant controls in case we need to do anything that's too "heavy" for JS. And, as mentioned, those extensions are developed 100% natively for each platform.
Now that Appcelerator has introduced Hyperloop, I've built a simple test app and saw that Hyperloop is not translated into native code but just into normal JS code:
// Hyperloop resolves this require into a dynamically generated proxy
// for the native UIKit class:
var UILabel = require('hyperloop/uikit/uilabel');
var label = new UILabel();
label.text = "HELLO WORLD!";
// $.index is the Alloy controller's top-level view:
$.index.add(label);
And another thing about it is that you have to run on the main thread.
So a few questions about the Hyperloop architecture come to mind:
Do we still have a bridge? If Hyperloop is JS that calls a "special" Hyperloop require, then we still have a bridge, one that now not only acts as a bridge but also needs to do some sort of reflection (which is also an expensive operation)?
Until now JS ran in its own thread, so running everything on the single main thread seems to be a potential source of more UI-blocking operations.
The old-fashioned modules were truly native (aside from the bridge call), so how do Hyperloop-enabled apps compare with those?
There isn't much documentation or many articles about Hyperloop explaining its inner workings yet, so answers from anyone who has been trying apps with it would be very helpful.
Answering your questions directly:
There are no Kroll proxies involved anymore, since the actual classes are generated at runtime. This is done using the hyperloop-metabase, which does reflection (as you already said) to build an AST that grabs the actual signatures, types, classes, methods, properties, etc.
We have not seen any performance issues with running on the main thread so far. If you do, please file a JIRA ticket so we can investigate the use case.
The old modules were "less native" than now, simply because they were all wrapped by the Kroll proxy (by extending every view from TiUIView and every proxy from TiProxy / TiViewProxy). Hyperloop does not work with those, making module development much faster, and it also allows developers to test their changes live in their app without the need to package and reference the module manually. Hyperloop modules are nothing other than CommonJS modules, which are already used frequently across Alloy and other Ti components.
I hope that gives you a quick overview on how Hyperloop works. If you have further questions, let us know!
Hans
(As a detailed answer to the above comment)
So let's say you have a tableview in iOS. The native class is UITableView and the Titanium-API is Ti.UI.TableView / Ti.UI.ListView.
While the ListView already provides a huge performance boost compared to the TableView by abstracting the child-API usage into templates, those child APIs (Ti.UI.Label, Ti.UI.ImageView, ...) are still custom classes that are wrapped and provide custom logic (!), e.g. keeping track of their parent references, internal data structures, and locks to jump between the threads. For comparison, the classic template API looks roughly like the sketch below.
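(A plain Titanium sketch of the Ti.UI.ListView template API; every object created here is a Kroll-wrapped proxy living behind the bridge:)

var win = Ti.UI.createWindow({ backgroundColor: 'white' });

// Each template child (Ti.UI.Label here) is itself a wrapped proxy class.
var listView = Ti.UI.createListView({
  templates: {
    row: {
      childTemplates: [
        { type: 'Ti.UI.Label', bindId: 'title', properties: { left: 10, color: '#000' } }
      ]
    }
  },
  defaultItemTemplate: 'row'
});

var section = Ti.UI.createListSection();
section.setItems([
  { title: { text: 'Hello' } },
  { title: { text: 'World' } }
]);
listView.sections = [section];

win.add(listView);
win.open();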
If you now check the Hyperloop example of a native UITableView, you access the native APIs directly, so no proxy behind it needs to manage sections, templates, items, etc. Of course we deliver that API through a Kroll proxy in order to display it in Titanium, but you don't jump across the bridge with every call you make from the SDK.
The easiest way to see that is to actually run some bigger example like the tableview, collectionview, and view-animation samples. If you do a fast scroll through these, you can already feel the performance boost compared to the "classic" Titanium APIs, simply because the only communication between your proxy and the Titanium side (like a Ti.UI.Window you want to add it to) is the .add() that receives the native API of the type HyperloopClass.
Finally, of course it still makes sense to use Ti.UI.ListView, for example, because it comes with the built-in utilities that Titanium devs love (events, easy configuration, and layout handling). But that's also where the benefit of Hyperloop comes in, by allowing developers to access those APIs themselves.
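As a minimal sketch of that direct access, reusing the require style from the snippet earlier in this thread (the exact class paths are illustrative):

// NOTE: illustrative class paths, mirroring the question's snippet.
var UIView = require('hyperloop/uikit/uiview');
var UIColor = require('hyperloop/uikit/uicolor');

var nativeView = new UIView();
nativeView.backgroundColor = UIColor.redColor();

// The single bridge crossing: handing the native view to the Titanium layer.
// After this, property access on nativeView goes straight to UIKit.
$.index.add(nativeView);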
I hope that helps make it a bit clearer.

Xamarin: get location cross-platform

I want to get the location (using GPS) in Xamarin.Forms, that is, cross-platform.
But I can't find a way; I only find platform-dependent solutions (for Android, for iOS, etc.).
If you know of one, please share it with me!
(I found Xamarin.Mobile's geolocation, but it is also platform-dependent. T^T)
This is going to be device-specific. Probably the best approach is to create an interface in your portable class library and then implement that interface in your Android- and iOS-specific projects. The PCL will connect to the implementation through the Xamarin.Forms DependencyService. Please have a look at the following link: Accessing Native Features via the DependencyService.
It is likely that you will be able to use the other examples on the Xamarin site to write your platform-specific code. For example here is a link to the Android LocationService
Check out Forms Labs. It should be pretty simple to reuse it even without Xamarin.Forms (if that's the case).

Objective-C in Mono

I have a .NET application, which I want to port to OSX. Up to now I used a DirectShow DLL for WebCam handling. Can I use an Objective-C DLL for Mono? How? I'm a newbie on Mac. Is there an existing (WebCam handling) solution for this? Is there a better solution?
You want to use the QTKit framework to do this; in particular, you can use QTCaptureView as a reusable NSView that you can embed in an existing window or application to do the actual video capturing.
I have just added support for capturing to the MonoMac bindings a few minutes ago after I saw your question, so you will need to do a little bit of work.
Steps:
Install Mono, MonoDevelop and the MonoMac addin as described here: http://mono-project.com/MonoMac
Download the latest sources for MonoMac and MacCore from Github: http://github.com/mono/maccore and http://github.com/mono/monomac
Update the MonoMac.dll to the latest version, by going into the monomac/src directory and typing "make update"
At this point you should be able to use the QTCaptureView in your MonoMac applications like any other NSView. A tutorial showing the use of the API in Objective-C is here:
http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/QTKitApplicationTutorial/BuildingaSimpleQTKitCaptureApplication/BuildingaSimpleQTKitCaptureApplication.html#//apple_ref/doc/uid/TP40008155-CH8-SW1
You can just use the equivalent versions in C#.
I'm not sure what you mean by "an Objective-C DLL for Mono".
Your absolute best approach is to learn the platform you're targeting and port only the logic and general architecture.
To access cameras, microphones, line-ins, etc. on Mac OS X, use QTKit (Quicktime Kit). It's mind-numbingly simple to set up a web cam view, record to files, grab frames, etc. It's built in and designed to make this sort of thing mostly drag-and-drop for developers.
MonoMac is just one alternative. There are Monobjc, CocoaSharp, NObjective, MObjc / MCocoa, and ObjC# (I cannot choose between them). These are all "bridges" between Mono and Cocoa, which means you can use the Cocoa API in a Mono application. But I don't want to use the API directly. I just want a dynamically linked library that provides me some functions for WebCam handling (as I said, that is what I had until now on Windows). In other words: I need a wrapper in Mono for QTKit.
PS: If I rewrite the application in Objective-C, that means several months, and double work in the future as the application grows. I love Objective-C, but I hate doing unnecessary work.
I tried the accepted code in Xcode, and when I tried to port it to MonoDevelop, several classes were missing, e.g. QTCaptureSession, QTCaptureDeviceInput, CVImageBuffer.
(Sorry, I cannot edit my previous messages, this is another account.)

sample mac Firefox Plugins?

I'm trying to re-write an old image-viewing plugin for the Mac. The old version uses QuickDraw (I said it was old) and resources (really, really old), and so it doesn't work in Firefox 3.6 (which is why I'm re-writing it).
I know some Objective-C, so I figure I'm going to re-write this using new-fangled Mac routines and nibs, etc. However, I don't know how to start. I've got the BasicPlugin example that comes with the Mozilla source, so I know how to create a plugin with entry points, etc. However, I don't know how to create the nib, or how to interface Obj-C with the entry points, etc.
Does anyone know of a more advanced sample for mac than BasicPlugin.bundle? (Preferably simple enough that I can just look at it and understand it...)
thanks.
Sadly, I don't really know of any good "intermediate" example. However, integrating Obj-C isn't that difficult, so the following is a short overview of what needs to be done.
You can use Obj-C and C/C++ sources in the same project; it's just advisable to keep them separated to some extent. This can, for example, be done by letting the source files with the entry points and other NPAPI interfacing stay plain C or C++ files, and forwarding calls into the plugin from there.
Opaque pointers help to keep a clean separation; see e.g. here.
The main changes to your plugin include switching to different drawing and event models. These have to be negotiated in NPP_New(); here is an example for the drawing model. When using Cocoa, and to support 64-bit environments, you need to use the Cocoa event model.
To draw UI elements you should be able to create an NSGraphicsContext from the CGContextRef and then draw an NSView into that context. See also the details provided in this post and its follow-ups.