Selected 3D model on camera view in Vuforia SDK - iOS 7

I am working on an augmented reality project using the Vuforia Unity extension for iOS. I have a list of 3D models (.3ds), and I want the selected model to be placed on the camera view when the user picks it from that list. I have already put the 3D models into my project's assets. Is there a tutorial available for rendering 3D models on the camera view? Please help me.

I was also looking for the same thing, and I found another SDK you can try: "metaio SDK.framework".
This is the link:
https://dev.metaio.com/sdk/getting-started/ios/creating-a-new-ar-application/
You can also check this link:
http://augmentedev.com/augmented-reality-sales-design/

Related

Is it possible to get the 3D view from Kinect v2 in Windows 10

I want to get access to the 3D view. Please see below the screenshot of the 3D view displayed in Kinect Studio:
Is it possible to access this view, maybe by using a Microsoft API? Or something like point cloud data?

How to put a custom 3D object on a 3D MKMapView in iOS 7.0

I am working on an iOS 7.0 app which contains an MKMapView. I succeeded in making the map 3D using MKMapCamera, but I have no idea how to put custom objects, written in COLLADA, wherever I want in this 3D space.
First question: is it possible to put custom 3D objects on an MKMapView?
Second question: if it is possible, what kind of information should I prepare for it? If only 3D polygons (sets of 3D vertices) are supported, I should look for a different library that can convert COLLADA files into some kind of polygon set class.
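For reference, a minimal sketch (Swift shown here, though an iOS 7 project would likely have used Objective-C) of the MKMapCamera setup the question describes; the coordinates and altitude are placeholder values:

```swift
import MapKit

// A sketch of the MKMapCamera setup described above; coordinates and altitude
// are placeholder values.
func applyTiltedCamera(to mapView: MKMapView) {
    let target = CLLocationCoordinate2D(latitude: 35.6595, longitude: 139.7005)
    // Put the eye slightly south of the target so the view is tilted rather than top-down.
    let eye = CLLocationCoordinate2D(latitude: 35.6540, longitude: 139.7005)
    let camera = MKMapCamera(lookingAtCenter: target,
                             fromEyeCoordinate: eye,
                             eyeAltitude: 400) // metres above ground
    mapView.setCamera(camera, animated: true)
}
```

As far as I know, MapKit itself only exposes flat overlays via MKOverlayRenderer, so a custom 3D mesh would need its own rendering layer composited over the map view.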

Draw custom graph in iOS

I need to draw custom graphs like those in the 3 images in iOS; can someone tell me how to accomplish this? Is it possible to draw those graphs using the Core Plot open-source library, or do I need to use the Quartz 2D library directly?
There are some beautiful tutorials on how to draw custom graphs in iOS.
Link1, Link2. For more, please refer to this section: LINK
You can use the Core Plot framework or XYPieChart for pie charts.
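If Core Plot turns out to be more than you need, here is a minimal Quartz 2D sketch (Swift, with placeholder data) of a UIView that strokes a simple line graph; it only shows the general draw(_:) approach, not any of the three graphs in the question:

```swift
import UIKit

// A minimal Quartz 2D sketch (not Core Plot): a view that strokes a simple
// line graph from an array of values. The data here is just a placeholder.
final class LineGraphView: UIView {
    var values: [CGFloat] = [3, 7, 4, 9, 6, 11] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard values.count > 1, let maxValue = values.max(), maxValue > 0 else { return }

        let stepX = rect.width / CGFloat(values.count - 1)
        let path = UIBezierPath()

        for (index, value) in values.enumerated() {
            // Flip y because UIKit's origin is at the top-left corner.
            let point = CGPoint(x: CGFloat(index) * stepX,
                                y: rect.height - (value / maxValue) * rect.height)
            if index == 0 {
                path.move(to: point)
            } else {
                path.addLine(to: point)
            }
        }

        UIColor.blue.setStroke()
        path.lineWidth = 2
        path.stroke()
    }
}
```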

How to map Kinect skeleton data to a model?

I have set up a Kinect device and written a simple program that reads the stream into a QImage using OpenNI 2.0. I have set up skeleton tracking with NiTE 2.0, so I have access to the coordinates of all 15 joints. I have also set up a simple scene using SceniX. The hand coordinates provided by the skeleton tracking are being used to draw 2 boxes representing the hands.
I would like to bind the whole skeleton to a (rigged) model, and can't seem to find any good tutorials. Does anyone have any idea how I should proceed?
Depending on your requirements, you could look at something like this for the Unity engine: https://www.assetstore.unity3d.com/en/#!/content/10693
There is also a plugin for Unreal Engine 4 called "Kinect 4 Unreal" from Opaque Multimedia.
But if you have to write it all by hand yourself, I have done something similar using OpenGL.
I used Assimp (http://assimp.sourceforge.net/) to load animated COLLADA models, and OpenNI with NiTE for skeletal tracking. I then took the rotation data from the NiTE skeleton and applied it to the corresponding bones of my rigged mesh, overwriting the rotation values of the animation. Don't use positional data; it will stretch your bones and distort the mesh.
There are many sources of free 3D models, like TF3DM.com. I used a custom rig for my models so that they would suit my code, so you might look into using Blender and how to rig a model.
Also remember that the NiTE skeleton has no joint for the pelvis, and that NiTE joints don't inherit their parent's rotation, contrary to the bones in a rigged model.
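To illustrate that last point: since the tracker reports world-space joint orientations while a rig expects per-bone local rotations, each bone's local rotation can be recovered by removing the parent's world rotation. The snippet below is only a sketch of that conversion (Swift with simd for brevity; the original answer did this in C++ with Assimp/OpenGL, and the TrackedJoint type is made up):

```swift
import simd

// Hypothetical illustration of the conversion described above. NiTE reports each
// joint's orientation in world space, while a rigged mesh expects each bone's
// rotation relative to its parent, so the parent's world rotation has to be
// removed first. The TrackedJoint type is made up for this sketch.
struct TrackedJoint {
    let worldOrientation: simd_quatf   // orientation as reported by the tracker
}

func localBoneRotation(for joint: TrackedJoint, parent: TrackedJoint?) -> simd_quatf {
    guard let parent = parent else {
        // Root bone: its world orientation is already its local orientation.
        return joint.worldOrientation
    }
    // Remove the parent's world rotation so only the relative rotation remains.
    return parent.worldOrientation.inverse * joint.worldOrientation
}
```

Only this rotation gets applied to the bone; the rig keeps its own bind-pose translations, which is why positional data should be left alone.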
I hope this gives you something to go on.
You can try DigitalRune; they have examples of binding a rigged model to joints. Try http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/155/Research-Augmented-Reality-with-Microsoft-Kinect.aspx
You would also need to know how to animate a model in Blender and export it to XNA or to your working graphics framework, e.g. http://www.codeproject.com/Articles/230540/Animating-single-bones-in-a-Blender-3D-model-with#SkinningSampleProject132

Live Face Recognition iOS adding 3D object

I need to create an iOS 5 application that will run on an iPad 2 (I can use private APIs because the app will not be released in the App Store). It will show a live stream from the front camera, recognize the eyes, and render a pair of glasses (I have the 3D model) that follows face movements.
Which is the best approach and the best technology (e.g. OpenGL ES) I can use?
Just use the libraries included in Xcode. I have a sample here. It's got everything you need.
It uses the AVFoundation, CoreImage, CoreMedia, and CoreVideo frameworks.
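Roughly, the CoreImage part of that pipeline detects the face and eye positions in each frame, and those points then drive where the glasses model is drawn. A minimal sketch (Swift shown for brevity; an iOS 5 project would have been Objective-C), leaving out the AVFoundation capture and the OpenGL ES rendering:

```swift
import CoreImage
import CoreGraphics

// A rough sketch of the CoreImage step: find the eye positions in one camera
// frame. Feeding frames in from AVFoundation and drawing the glasses (e.g.
// with OpenGL ES) are separate steps not shown here.
let faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

func eyePositions(in frame: CIImage) -> (left: CGPoint, right: CGPoint)? {
    guard let face = faceDetector?.features(in: frame).first as? CIFaceFeature,
          face.hasLeftEyePosition, face.hasRightEyePosition else {
        return nil
    }
    // Positions are in CoreImage coordinates (origin at the bottom-left).
    return (face.leftEyePosition, face.rightEyePosition)
}
```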