How to make the Apple MapKit VR Flyover in the ARKit framework?

I'm trying to see if it's possible to take the Apple Maps VR Flyover experience and display it in AR with ARKit instead of VR.
I know the map data can be accessed through MapKit, but I haven't found a way to extract the map and 3D building data into ARKit so that I could, for example, place a 3D model of a city on my table after detecting a surface.
Is this possible?
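There is no public API that exposes Flyover's 3D mesh, so the building data itself cannot be pulled out. A compromise I can sketch (my own assumption of what you might settle for, not an official pattern) is to render a flat map snapshot with MKMapSnapshotter and texture it onto a plane anchored to a detected surface; the coordinates and sizes below are placeholders:

    import ARKit
    import MapKit
    import SceneKit
    import UIKit

    final class MapARViewController: UIViewController, ARSCNViewDelegate {
        let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            sceneView.delegate = self
            view.addSubview(sceneView)

            let config = ARWorldTrackingConfiguration()
            config.planeDetection = .horizontal
            sceneView.session.run(config)
        }

        // When ARKit detects a horizontal surface, snapshot the map and lay it flat on top.
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARPlaneAnchor else { return }

            let options = MKMapSnapshotter.Options()
            options.mapType = .satelliteFlyover   // flat Flyover imagery only; the mesh is not exposed
            options.region = MKCoordinateRegion(
                center: CLLocationCoordinate2D(latitude: 37.7749, longitude: -122.4194), // placeholder: San Francisco
                latitudinalMeters: 2000, longitudinalMeters: 2000)
            options.size = CGSize(width: 1024, height: 1024)

            MKMapSnapshotter(options: options).start { snapshot, _ in
                guard let image = snapshot?.image else { return }
                let plane = SCNPlane(width: 0.5, height: 0.5)   // a 0.5 m square on the table
                plane.firstMaterial?.diffuse.contents = image
                let planeNode = SCNNode(geometry: plane)
                planeNode.eulerAngles.x = -.pi / 2              // rotate the plane to lie horizontally
                node.addChildNode(planeNode)
            }
        }
    }

This only gives you the flat imagery; recreating the extruded buildings would mean sourcing your own 3D city models.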

Related

Implementing a 3D Minimap for VR in Unreal Engine

I am working on a VR tower defense game. I want to place my towers on a small map of the field and at the same time show the field/units/towers on the small map, in 3D. Like this:
http://halo.bungie.net/images/games/Halo3ODST/imagery/screenshots/H3ODST_PreparetoDropCinematic.jpg
The map would essentially be a small clone of the field. Is there a way to do this with a camera or something similar, so that my minimap is just a re-render/clone of the field?
Sorry if this is the wrong place, but the Unreal Engine Forum is not working at the moment.
I don't know of a simple camera projection to 3D (camera-based minimaps usually rely on scene capture onto 2D textures).
You can fake it with a tricky camera setup, though: show a normal camera projection, place it in the world, and make the actual camera position depend on where the user is standing in relation to the map.
You can cover this up with particle effects so it doesn't look like a TV screen following you around.
Depending on the complexity of your game, and how well you like the solution above, it might be better to implement a calculated model: keep miniature models of everything you want to show on the map, and have a map class project each "object that is shown on map", via its miniature version, onto the map object (sketched below).
Of course this is a lot more work than just setting up a camera, but you get much better control over how the map looks and what it can do (you could add functionality, control options, special views, and so on).
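The bookkeeping behind the "calculated model" approach is simple: keep a scaled clone of every tracked object and re-sync it each frame. Since the rest of this thread is iOS-focused, here is a sketch of that idea in Swift/SceneKit rather than Unreal; the class, names, and 1:100 scale are mine, and in Unreal the equivalent would be scaled child actors under a map actor:

    import SceneKit

    // Keeps miniature clones of selected nodes in sync with the real field.
    final class Minimap {
        private let mapRoot: SCNNode              // the table-top map node
        private let scale: Float                  // e.g. 0.01 for a 1:100 miniature
        private var clones: [SCNNode: SCNNode] = [:]

        init(mapRoot: SCNNode, scale: Float) {
            self.mapRoot = mapRoot
            self.scale = scale
        }

        // Register an object that should appear on the map.
        // Assumes originals sit directly under the scene root.
        func track(_ node: SCNNode) {
            let clone = node.clone()              // shares geometry, so cloning is cheap
            clone.scale = SCNVector3(scale, scale, scale)
            mapRoot.addChildNode(clone)
            clones[node] = clone
        }

        // Call once per frame, e.g. from the renderer(_:updateAtTime:) delegate.
        func update() {
            for (original, clone) in clones {
                clone.position = SCNVector3(original.position.x * scale,
                                            original.position.y * scale,
                                            original.position.z * scale)
                clone.orientation = original.orientation
            }
        }
    }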

How to put a custom 3D object on a 3D MKMapView in iOS 7.0

I am working on an iOS 7.0 app which contains an MKMapView. I succeeded in making this map 3D using MKMapCamera, but I have no idea how to place custom objects, written in COLLADA, wherever I want in this 3D space.
First question: is it possible to put custom 3D objects on MKMapView?
Second question: if it is possible, what kind of information should I prepare? If only 3D polygons (sets of 3D vertices) are supported, I should look for a library that can convert COLLADA files into some kind of polygon-set class.
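On the second question: SceneKit (iOS 8 and later) parses COLLADA (.dae) files natively, so no converter library is needed there; on iOS 7 itself you would indeed need your own parser. On the first: MKMapView has no public API for placing arbitrary 3D models, so the sketch below shows a workaround I am assuming (not an official pattern): a transparent SCNView layered over the map, re-pinned to a coordinate whenever the region changes. The file name and coordinate are placeholders:

    import MapKit
    import SceneKit
    import UIKit

    // SceneKit loads the COLLADA file directly; the SCNView floats above the map.
    func addModel(to mapView: MKMapView, at coordinate: CLLocationCoordinate2D) -> SCNView {
        let modelView = SCNView(frame: CGRect(x: 0, y: 0, width: 120, height: 120))
        modelView.scene = SCNScene(named: "building.dae")
        modelView.backgroundColor = .clear
        modelView.isUserInteractionEnabled = false   // let touches reach the map
        mapView.addSubview(modelView)
        modelView.center = mapView.convert(coordinate, toPointTo: mapView)
        return modelView
    }

    // Re-run the convert(_:toPointTo:) step from mapView(_:regionDidChangeAnimated:)
    // so the model stays pinned while the user pans or zooms.

The weak point is that the overlay does not inherit the map camera's pitch and heading, so it only looks right on a top-down or near-top-down map.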

How to highlight countries in iOS maps

I am building an app in which I have to highlight some countries dynamically on the world map.
In short, I want to customize the whole view of iOS maps as shown in the images.
Can this be done using MapKit, or is there another method? Thanks in advance.
You want to look into the Mapbox iOS SDK, which will let you do this and more with a MapKit-like API. In particular, you can quickly make a custom map with TileMill using the provided Natural Earth data set for world country borders, enable UTFGrid interactivity so that tapped regions can be identified, and use the RMShape class on an RMAnnotation added to the map view to add and color country polygons as needed. This sounds a little complex, but the tools exist, are entirely free and open source, and I can help you with the process.
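If you would rather stay inside MapKit than add the Mapbox stack, here is a sketch of an alternative (my own, not part of the answer above): load country borders from a data set such as Natural Earth and add them as MKPolygon overlays, filling them in the renderer delegate:

    import MapKit
    import UIKit

    // Highlight one country, given its border as an array of coordinates
    // (loaded elsewhere, e.g. from a Natural Earth GeoJSON file).
    func highlight(border: [CLLocationCoordinate2D], on mapView: MKMapView) {
        let polygon = MKPolygon(coordinates: border, count: border.count)
        mapView.addOverlay(polygon)
    }

    // MKMapViewDelegate: fill each country polygon with a translucent color.
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        guard let polygon = overlay as? MKPolygon else { return MKOverlayRenderer(overlay: overlay) }
        let renderer = MKPolygonRenderer(polygon: polygon)
        renderer.fillColor = UIColor.red.withAlphaComponent(0.4)
        renderer.strokeColor = .red
        renderer.lineWidth = 1
        return renderer
    }

On iOS 13 and later, MKGeoJSONDecoder can parse the border file for you directly.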

Drawing maps without base images

I would like to draw a series of maps in an iOS application, preferably without using any image files as a base.
For example, I want to draw a map of the United States with states and counties outlined. Does anyone know of a way to do this?
By draw, I mean draw the map in code, maybe using Apple's MapKit API?
You might want to look here
http://planet.openstreetmap.org
to get the data. Then you can use 2D graphics libraries such as Core Graphics to draw it.
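Once you have boundary data (OpenStreetMap extracts, or the US Census TIGER files for state and county lines), the drawing itself is just projecting each latitude/longitude ring to view space and stroking it. A minimal Core Graphics sketch, assuming a crude equirectangular projection (fine for a single state, not for a whole-world map):

    import UIKit

    // Draws closed boundary rings (arrays of lon/lat points) into a view.
    final class BoundaryView: UIView {
        var rings: [[CGPoint]] = []   // x = longitude, y = latitude

        override func draw(_ rect: CGRect) {
            let all = rings.flatMap { $0 }
            guard let ctx = UIGraphicsGetCurrentContext(), !all.isEmpty else { return }

            // Fit the bounding box of all rings into the view.
            let minX = all.map(\.x).min()!, maxX = all.map(\.x).max()!
            let minY = all.map(\.y).min()!, maxY = all.map(\.y).max()!
            let scale = min(bounds.width / (maxX - minX), bounds.height / (maxY - minY))

            ctx.setStrokeColor(UIColor.black.cgColor)
            ctx.setLineWidth(1)
            for ring in rings {
                let path = CGMutablePath()
                for (i, p) in ring.enumerated() {
                    // Flip latitude: view y grows downward.
                    let pt = CGPoint(x: (p.x - minX) * scale,
                                     y: (maxY - p.y) * scale)
                    if i == 0 { path.move(to: pt) } else { path.addLine(to: pt) }
                }
                path.closeSubpath()
                ctx.addPath(path)
            }
            ctx.strokePath()
        }
    }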

iOS 5: how to get device orientation from CMAttitude using CMAttitudeReferenceFrameXTrueNorthZVertical?

I'm building an augmented reality game in iOS 5 on devices that support gyroscopes.
I want to use CMAttitudeReferenceFrameXTrueNorthZVertical to map the device orientation and find out which CLLocation the device is looking toward. This is a new reference frame available in iOS 5, based on sensor-fusion algorithms, and it is supposed to be much smoother than the accelerometer-based approach.
I see a lot of examples of pre-iOS 5 code that use the accelerometer, and older AR implementations built on it. To rewrite such code, I need to understand how to map the new CMAttitude and the current location into a vector pointing from the current location to some other CLLocation, that is, the vector drawn from the center of the screen out the back of the iPhone toward that reference point.
Thank you for any hints!
Look at the Apple pARk sample. It sets up a perspective transform that covers the screen, then projects the 3D coordinate of the target geo location relative to your own location. https://developer.apple.com/library/ios/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083
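The pARk sample builds a full camera projection; for the narrower question of "which CLLocation is the device facing," a starting point is to run device motion in the true-north frame and compare the device's yaw with the great-circle bearing to the target. The bearing formula below is standard; how yaw maps to compass heading depends on how the device is held, so treat that part as an assumption to verify against pARk. Note that the true-north frames also require location services, since they need the magnetic-declination correction.

    import CoreLocation
    import CoreMotion
    import Foundation

    let motionManager = CMMotionManager()

    // Great-circle initial bearing from a to b, in radians clockwise from true north.
    func bearing(from a: CLLocationCoordinate2D, to b: CLLocationCoordinate2D) -> Double {
        let φ1 = a.latitude * .pi / 180, φ2 = b.latitude * .pi / 180
        let Δλ = (b.longitude - a.longitude) * .pi / 180
        let y = sin(Δλ) * cos(φ2)
        let x = cos(φ1) * sin(φ2) - sin(φ1) * cos(φ2) * cos(Δλ)
        return atan2(y, x)
    }

    func startTracking(current: CLLocationCoordinate2D, target: CLLocationCoordinate2D) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(using: .xTrueNorthZVertical,
                                               to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // With the X-true-north frame, yaw is rotation about the vertical axis.
            // The zero point depends on device orientation, so calibrate the offset
            // empirically (assumption -- verify against the pARk sample).
            let headingError = bearing(from: current, to: target) - attitude.yaw
            print("rotate by \(headingError) rad to face the target")
        }
    }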