I'm building an augmented reality game in iOS5 on devices that support gyroscopes.
I want to use CMAttitudeReferenceFrameXTrueNorthZVertical to map the device orientation and find out which CLLocation the device is looking toward. This is a new reference frame available in iOS5, based on sensor fusion algorithms, and it is supposed to be much smoother than the accelerometer-based approach.
I see a lot of examples of pre-iOS5 AR code built on the accelerometer. To rewrite such code, I need to understand how to map the new CMAttitude and the current location into a vector from the current location to some other CLLocation, i.e. the ray drawn from the center of the screen, out the back of the iPhone, toward that reference point.
Thank you for any hints!
Look at the Apple pARk sample. It does a perspective transform that covers the screen, then projects the 3D coordinate from your location to the other geo location. https://developer.apple.com/library/ios/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083
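To make the idea concrete, here is a minimal sketch of the projection step in TypeScript (the math only; the names are illustrative and not taken from pARk, and a spherical Earth is assumed). It builds the unit vector from your location to the target, expressed in local east/north/up axes; multiplying that by the CMAttitude rotation matrix then gets it into camera space.

type Geo = { lat: number; lon: number }; // degrees

const EARTH_RADIUS_M = 6371000; // spherical-Earth approximation
const rad = (deg: number) => (deg * Math.PI) / 180;

// Geographic coordinates -> Earth-Centered, Earth-Fixed (x, y, z) in meters.
function toEcef(g: Geo): [number, number, number] {
  const la = rad(g.lat), lo = rad(g.lon);
  return [
    EARTH_RADIUS_M * Math.cos(la) * Math.cos(lo),
    EARTH_RADIUS_M * Math.cos(la) * Math.sin(lo),
    EARTH_RADIUS_M * Math.sin(la),
  ];
}

// Unit vector from `user` toward `target`, in east/north/up axes at `user`.
function directionTo(user: Geo, target: Geo): [number, number, number] {
  const [ux, uy, uz] = toEcef(user);
  const [tx, ty, tz] = toEcef(target);
  const [dx, dy, dz] = [tx - ux, ty - uy, tz - uz];
  const la = rad(user.lat), lo = rad(user.lon);
  const east  = -Math.sin(lo) * dx + Math.cos(lo) * dy;
  const north = -Math.sin(la) * Math.cos(lo) * dx - Math.sin(la) * Math.sin(lo) * dy + Math.cos(la) * dz;
  const up    =  Math.cos(la) * Math.cos(lo) * dx + Math.cos(la) * Math.sin(lo) * dy + Math.sin(la) * dz;
  const len = Math.hypot(east, north, up);
  return [east / len, north / len, up / len];
}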
I'm working on an application for Android & iOS to show points of interest over the camera. ARKit & ARCore have poor device compatibility nowadays.
Could you recommend a library to do this? If it comes with an example, even better! I know viro-media, but I don't understand how to do this using that library.
I don't want 3D models, just markers over the camera, similar to the attached image.
To do this with Viro React -- and in AR in general -- the trick is to recognize that there are two coordinate systems:
The local coordinate system of your device, which we'll call 'AR space'. In Viro, this is centered at the user's initial position when the application starts, and is in meters.
Geographic coordinates (latitude and longitude).
To position the overlays, you have to convert your content from geographic coordinates into AR space. This is a two-step process. First project the spherical geographic coordinates onto a 2D plane -- the Web Mercator is great for this. Then translate the projected coordinates by the device's initial projected position.
The device's initial projected position can be derived by projecting its initial geographic position. In Viro React, you can use the Geolocation module to grab this when the user starts the app.
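A minimal sketch of those two steps (TypeScript; assuming a Web Mercator projection and an AR session whose -z axis points true north -- function names are illustrative, not Viro API):

const R = 6378137; // radius used by Web Mercator, in meters

// Step 1: project spherical lat/lon (degrees) onto a flat plane, in meters.
function mercator(lat: number, lon: number): { x: number; y: number } {
  return {
    x: R * (lon * Math.PI) / 180,
    y: R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360)),
  };
}

// Step 2: translate by the device's initial projected position. With a
// heading-aligned session, +x is east and -z is north; y (height) stays 0.
function geoToAr(deviceLat: number, deviceLon: number, lat: number, lon: number): [number, number, number] {
  const d = mercator(deviceLat, deviceLon);
  const p = mercator(lat, lon);
  return [p.x - d.x, 0, -(p.y - d.y)];
}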
Finally, you'll need to do a similar transformation for the user's bearing: converting from compass direction to device orientation in AR space.
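For instance (a sketch only; the sign convention depends on your setup, so verify against a known landmark):

// Rotate a projected [x, y, z] offset about the vertical axis by the initial
// compass bearing (degrees clockwise from true north), so that north in the
// projected data lines up with north in AR space.
function rotateByBearing(v: [number, number, number], bearingDeg: number): [number, number, number] {
  const b = (bearingDeg * Math.PI) / 180;
  const [x, y, z] = v;
  return [x * Math.cos(b) + z * Math.sin(b), y, -x * Math.sin(b) + z * Math.cos(b)];
}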
For this to work well you'll likely have to figure out how to handle inaccurate geolocation lookups (e.g. what happens if the location retrieved from the device is inaccurate), and may also have to account for drift: over time the two coordinate systems may fall out of sync.
The last part, creating the info cards, is easy with Viro -- you either pre-bake the images with text and use ViroImage, or if the cards need to be more dynamic you can use a ViroFlexView.
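For example, a hedged sketch of such a card (prop values are illustrative; check the Viro React docs for the exact API):

import React from "react";
import { ViroNode, ViroFlexView, ViroText } from "react-viro";

// A billboarded info card: the node turns to keep facing the user.
const PoiCard = (props: { title: string; position: number[] }) => (
  <ViroNode position={props.position} transformBehaviors={["billboard"]}>
    <ViroFlexView width={1} height={0.5}
                  style={{ flexDirection: "column", backgroundColor: "#ffffff" }}>
      <ViroText text={props.title} style={{ fontSize: 24, color: "#222222" }} />
    </ViroFlexView>
  </ViroNode>
);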
I am also interested in this one and I'm trying out ViroReact!
I find it a bit difficult to understand how to make this work once the lats and longs have been converted to x-y values. What should the z-value be?
Let's say you have the lat-lon coordinates [59, 10] as the user location and you want to show where [59, 11] is relative to that location. How do you build that in a ViroARScene?
<ViroNode position={userLocationFromLatLonCartesian}>
  <ViroBox position={poiLocationFromLatLonToCartesian} />
</ViroNode>
So how do you calculate the scale, position and rotation, so that the object will be visible?
It seems like https://github.com/proj4js/proj4js is a library that could provide conversions from lat-lon to x-y values.
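For the [59, 10] to [59, 11] example above, a sketch using proj4js (EPSG:4326 to EPSG:3857 is the Web Mercator projection; variable names are illustrative):

import proj4 from "proj4";

// proj4 expects [lon, lat] order.
const toMercator = (lat: number, lon: number) => proj4("EPSG:4326", "EPSG:3857", [lon, lat]);

const [ux, uy] = toMercator(59, 10); // user
const [px, py] = toMercator(59, 11); // point of interest

// Keep the user at the AR origin; the z-value carries the north-south offset
// (north maps to -z in a heading-aligned scene) and y (height) stays 0.
const poiPosition = [px - ux, 0, -(py - uy)];
// Roughly [111319, 0, 0]. Two caveats: Web Mercator stretches distances by
// 1/cos(lat) (about 2x at 59°N), and a POI 100+ km away would be invisible
// anyway, so in practice you clamp distant markers to a fixed radius around
// the user and keep only the direction.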
I found that both the Android and iOS AR SDKs support location-based AR views. Reference:
https://developers.google.com/ar/develop/ios/geospatial/quickstart and https://developer.apple.com/documentation/arkit/argeoanchor
Why would you ever want something like this?
I want to track a single user who is suspended above the ground in a horizontal position, facing downwards to allow free movement of the legs and arms. Think of swimming, for example.
I mounted the Kinect at the ceiling facing downwards so I have a free view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (you're usually taller than you are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I excluded it here because it won't allow you to change the source code and so cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class generates the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z. If you know where I could find this, please share.
EDIT: As I just learned, there is no open source software that generates a skeleton from depth images, so I fall back to the question in the header:
How can I get a user skeleton from a rotated back view?
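For what it's worth, the guessed correction is easy to state as plain math. Here is a TypeScript sketch of just the rotation (it says nothing about where OpenNI's prototype skeleton lives, which remains the open question):

type Vec3 = [number, number, number];
const rad = (deg: number) => (deg * Math.PI) / 180;

// Rotate 180° around y (turn the back view into a front view) ...
const rotY = (deg: number) => ([x, y, z]: Vec3): Vec3 => {
  const a = rad(deg);
  return [x * Math.cos(a) + z * Math.sin(a), y, -x * Math.sin(a) + z * Math.cos(a)];
};

// ... then 90° around z (undo the sensor's portrait mounting).
const rotZ = (deg: number) => ([x, y, z]: Vec3): Vec3 => {
  const a = rad(deg);
  return [x * Math.cos(a) - y * Math.sin(a), x * Math.sin(a) + y * Math.cos(a), z];
};

// Apply to every joint position of the prototype skeleton.
const correctJoint = (p: Vec3): Vec3 => rotZ(90)(rotY(180)(p));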
I'm looking for an iPhone-based project, preferably iOS5 with ARC, that uses the iPhone4's gyro to look around in a spherical coordinate system. The phone is at the center of a sphere, and by looking at the sensor output it can determine where the camera is pointing in spherical coordinates.
I'm not sure whether what I'm thinking of can be accomplished with iOS5's CMAttitude, which fuses the iPhone4's sensors. Can it?
I intend to use the project to control a robotic turret and make it be able to "look" at a particular point within a spherical coordinate system.
What comes to mind is that a 360 panorama or a TourWrist-like app would be a good starting point for such a project. Is there something similar that is open source and uses the native iOS Core Motion framework?
Thank you!
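For the core math, here is a hedged sketch (TypeScript for the math only). It assumes CMAttitude's rotation matrix maps device axes into the XTrueNorthZVertical reference frame; conventions differ between sources, so if results look mirrored, transpose the matrix first:

type Mat3 = number[][]; // row-major copy of CMAttitude's rotationMatrix

function cameraSpherical(R: Mat3): { azimuth: number; elevation: number } {
  // The back camera looks along the device's -z axis; in the reference
  // frame that is minus the third column of the matrix.
  const dir = [-R[0][2], -R[1][2], -R[2][2]];
  // In this reference frame, x points true north and z points up.
  const azimuth = Math.atan2(-dir[1], dir[0]); // radians clockwise from north
  const elevation = Math.asin(dir[2]);         // radians above the horizon
  return { azimuth, elevation };               // feed these to the turret
}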
If you would like to license the TourWrist technology, please let me know. For example, we license the TourWrist capture and viewer APIs/SDKs.
Dan Smigrod
via: support#TourWrist.com
I am trying to develop an iPhone application which needs to show a 360 degree video like the one, and rotate the video as per the phone's movement. How can I do this? Is it possible to do this with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As #Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well if you modify the code accordingly. They already handle direction, accelerometer etc., so you don't have to implement all that yourself.
I need some information about using cocos2d on the iPad.
Can we use 2048x2048 sprite sheets? I read in this forum that we can, but with a limitation of not more than 3 or 4 sprite sheets.
But I have 10 animations in my game, with a maximum of 4 animations running at a time.
Can we use CCDirector in the AppDelegate in the same way as we do on the iPhone?
// Prefer the CADisplayLink-driven director; fall back to the default timer.
if( ! [CCDirector setDirectorType:CCDirectorTypeDisplayLink] )
    [CCDirector setDirectorType:CCDirectorTypeDefault];

// Use 32-bit RGBA for the GL surface and for newly loaded textures.
[[CCDirector sharedDirector] setPixelFormat:kPixelFormatRGBA8888];
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
What is the maximum image size that we can use?
If there are any limitations regarding cocos2d and the iPad, please post them.
Thank you.
The iPad resolution is 1024x768. It doesn't make sense to use higher resolution images in your game unless you intend to zoom in to see a LOT more detail. Even so, you need to evaluate whether you really want to do this. If you do, be sure to turn on mipmapping for your textures in Cocos2D.
If you use higher resolution images, the iOS device's PowerVR chip (the video processor) is going to pay the price and your game will perform much slower. If you can help it, tile much smaller images instead (say 256x256 or lower). In the end it depends on how fast your game runs on your target device.
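The arithmetic behind that advice, sketched in TypeScript for concreteness (RGBA4444 is cocos2d's usual half-memory alternative):

// Uncompressed texture memory: width * height * bytes per pixel.
const textureMB = (w: number, h: number, bytesPerPixel: number) =>
  (w * h * bytesPerPixel) / (1024 * 1024);

console.log(textureMB(2048, 2048, 4)); // 16 MB per sheet at RGBA8888
console.log(textureMB(2048, 2048, 2)); // 8 MB at RGBA4444
console.log(textureMB(256, 256, 4));   // 0.25 MB per 256x256 tile

// Four RGBA8888 sheets already cost 64 MB of texture memory, which helps
// explain the "3 or 4 sheets" limit mentioned in the question.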