How to put a custom 3D object on a 3D MKMapView in iOS 7.0 - objective-c

I am working on an iOS 7.0 app which contains an MKMapView. I succeeded in making the map 3D using MKMapCamera, but I have no idea how to place custom objects, defined in COLLADA files, wherever I want in this 3D space.
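For reference, the camera setup I use looks roughly like this (the coordinates and altitude are placeholder values):

    CLLocationCoordinate2D target = CLLocationCoordinate2DMake(37.7749, -122.4194);
    CLLocationCoordinate2D eye = CLLocationCoordinate2DMake(37.7849, -122.4294);
    // Look at `target` from a virtual camera hovering above `eye` at 500 m.
    MKMapCamera *camera = [MKMapCamera cameraLookingAtCenterCoordinate:target
                                                     fromEyeCoordinate:eye
                                                           eyeAltitude:500.0];
    [self.mapView setCamera:camera animated:YES];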
First question: is it possible to put custom 3D objects on an MKMapView?
Second question: if it is possible, what kind of data should I prepare? If only 3D polygons (sets of 3D vertices) are supported, I should look for a library that can convert COLLADA files into some kind of polygon-set class.

Related

2D image to 3D world coordinate in perspective view

I have been trying to locate detected objects from a 2D image in 3D space, for a single fixed camera installed at a known height.
I went through similar questions, but the perspective view is not addressed.
What I have:
The height of the camera
Calibration parameters
The exact location of one fixed object in view
I've written a set of solutions to this kind of problem. 3D point reconstruction from 2D coordinates (yes, "in perspective") is obtained by means of the extrinsics matrix. See https://github.com/rodolfoap/screen2world-k. Other methods are linked from there.
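For context, this is the standard pinhole model that such a reconstruction inverts (generic notation, not taken from the repository): K is the 3x3 intrinsics matrix from calibration, and [R | t] is the extrinsics matrix.

    % Pixel (u, v) from world point (X, Y, Z), up to an unknown scale s:
    s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
      = K \, [R \mid t]
        \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}

A single pixel only determines a ray, so one extra constraint is needed to fix the scale s; knowing the camera height and assuming the objects sit on the ground plane (e.g. Z = 0) is what makes the inversion well posed.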

How do I render to multiple 3d targets in Vulkan?

I have some legacy DX11 code that renders to multiple 3D render targets. The destination target is passed via SV_Target[n], and the slice is set via SV_RenderTargetArrayIndex in the GS. Is there any way to do the same in Vulkan?
My plan is to create an individual view for each slice of each 3D target and pass them all together as attachments to a single framebuffer; then in the GS I can have something like gl_Layer = sliceNo + targetOffsets[xx]. Is there any better solution?
In Vulkan, the GS SV_RenderTargetArrayIndex is called Layer in SPIR-V or gl_Layer in GLSL. It behaves the same as in D3D. You create one view per 3D target, and attach that to the framebuffer. The Layer output from the GS will say which layer (of all the targets) the output primitive is drawn to.
In Vulkan there are no "true" 3D framebuffer attachments, in the sense that after projection to screen-space coordinates everything exists in a 2D plane. So attachment image views can have 2D_ARRAY dimensionality, but not 3D. The "Image and image view parameter compatibility requirements" table says that given a 3D image, you can create a 2D_ARRAY image view with layerCount >= 1. Note that you have to create the image with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
So if you want to have N 3D render target images:
Create your N 3D images, with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
Create one image view for each image, with VK_IMAGE_VIEW_TYPE_2D_ARRAY and layerCount equal to the number of slices you want to be able to render to.
Create a VkRenderPass with one VkAttachmentDescription per 3D render target, plus whatever others you need for depth/stencil, resolve target, etc.
Create a VkFramebuffer based on that VkRenderPass, and pass your image views in the VkFramebufferCreateInfo::pAttachments array. Set VkFramebufferCreateInfo::layers to the number of layers/slices you want to be able to render to. (A rough sketch of these steps follows below.)
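A minimal sketch of steps 1, 2, and 4, assuming a valid device, a renderPass built as in step 3, and width/height/depth variables (memory allocation, binding, and error handling omitted):

    // Step 1: a 3D image whose slices can be exposed as 2D array layers.
    VkImageCreateInfo imageInfo = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .flags = VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT,
        .imageType = VK_IMAGE_TYPE_3D,
        .format = VK_FORMAT_R8G8B8A8_UNORM,
        .extent = { width, height, depth },
        .mipLevels = 1,
        .arrayLayers = 1, // 3D images have one layer; slices come from extent.depth
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = VK_IMAGE_TILING_OPTIMAL,
        .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
    };
    VkImage image;
    vkCreateImage(device, &imageInfo, NULL, &image);

    // Step 2: view the 3D image as a 2D array so it can be a color attachment.
    VkImageViewCreateInfo viewInfo = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
        .image = image,
        .viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY,
        .format = imageInfo.format,
        .subresourceRange = {
            .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
            .levelCount = 1,
            .layerCount = depth, // one layer per renderable slice
        },
    };
    VkImageView view;
    vkCreateImageView(device, &viewInfo, NULL, &view);

    // Step 4: the framebuffer's layer count covers all slices.
    VkFramebufferCreateInfo fbInfo = {
        .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
        .renderPass = renderPass,
        .attachmentCount = 1,
        .pAttachments = &view,
        .width = width,
        .height = height,
        .layers = depth, // slices addressable via gl_Layer in the GS
    };
    VkFramebuffer framebuffer;
    vkCreateFramebuffer(device, &fbInfo, NULL, &framebuffer);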
[Edit: Below paragraph can be ignored based on first comment. Leaving it for transparency.]
I'm confused about what you're trying to do with SV_Target[n]. In both D3D and Vulkan, if you've got multiple render targets / color attachments, the fragment shader will write to all of them; if your fragment shader doesn't provide a value for a bound target, the value written is undefined. So SV_Target[n] tells you which shader output variables go to which target, but it doesn't let you write to some without writing to others. Vulkan works similarly, with fragment shader outputs bound to color attachments via layout(location = n) qualifiers in GLSL.
If you're talking about having 1 draw call rendered from multiple points of view (but otherwise using the same pipeline) then you want VK_KHR_multiview. This is an extension in Vulkan 1.0, but core in 1.1.
There's an example of its usage here, and the corresponding shader functionality is here. It functions similarly to what you seem to describe. You attach multiple images from a texture array to a single framebuffer ("render target" in D3D), and then in the vertex shader you can determine which layer you're rendering to via the gl_ViewIndex variable. There's no need for a geometry shader with this approach.
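For reference, a minimal sketch of enabling multiview at render pass creation, assuming renderPassInfo is an otherwise complete VkRenderPassCreateInfo with a single subpass:

    // Each bit in the view mask is one view; the vertex shader reads
    // gl_ViewIndex to know which view it is currently rendering.
    uint32_t viewMask = 0x3;        // two views, drawn to layers 0 and 1
    uint32_t correlationMask = 0x3; // hint that the views are spatially correlated

    VkRenderPassMultiviewCreateInfo multiviewInfo = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO,
        .subpassCount = 1,
        .pViewMasks = &viewMask,
        .correlationMaskCount = 1,
        .pCorrelationMasks = &correlationMask,
    };
    renderPassInfo.pNext = &multiviewInfo;
    // Then vkCreateRenderPass(device, &renderPassInfo, NULL, &renderPass) as usual.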

Drawing maps without base images

I would like to draw a series of maps in an iOS application. Preferably, without using any image files as a base.
For example, I want to draw a map of the United States with states and counties outlined. Does anyone know of a way to do this?
By draw, I mean draw the map in code. Maybe using Apple's Map kit API?
You might want to look here:
http://planet.openstreetmap.org
to get the data. Then you can use the 2D graphics libraries to draw it.
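For example, once you've parsed a boundary out of the OSM data and projected it into view coordinates, stroking it with Core Graphics is straightforward. A minimal sketch (the points are placeholders for one projected boundary ring):

    // In a UIView subclass; `points` would come from your projected OSM data.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGPoint points[3] = { {10, 10}, {120, 40}, {60, 150} }; // placeholder ring
        CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
        CGContextSetLineWidth(ctx, 1.0);
        CGContextAddLines(ctx, points, 3);
        CGContextClosePath(ctx);
        CGContextStrokePath(ctx);
    }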

3D Transformations on a Quartz2D path — Drawing Application

I'm in the planning stage of writing a Cocoa drawing application (for Mac, not iOS), and I'm trying to discern whether one of my features is technically possible via any of the drawing frameworks. Any help or relevant information would be greatly appreciated.
The idea is to apply a 3D transformation to an object drawn with Quartz2D. I've considered capturing the relevant portion of the canvas view (where objects are drawn) as an image and sending it to Core Animation, but that doesn't seem like the best option. Since this is a drawing application, it's less about 3D animation than about the transformed shape. This solution is also less than ideal because I assume that if the 2D object were a vector path rather than a bitmap image, I would have to rasterize it to apply such a transformation. The ideal implementation would let the user dynamically rotate a flat object in three dimensions until she found a suitable orientation, lock in this transformation, and still be able to manually adjust the path's vector points.
Is this feasible? Would it require working directly with OpenGL? Help of any kind is most welcome.
Thank you!
Seems to me that anything you'd do with a 3D transform, you should be able to do with multiple affine transforms. See UIBezierPath's -applyTransform method.
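For instance, a shear combined with a vertical scale approximates the foreshortening of a tilted plane while keeping the path as editable vector geometry. A minimal sketch in iOS naming (on the Mac, NSBezierPath has the analogous -transformUsingAffineTransform: method):

    UIBezierPath *path = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, 100, 100)];
    // Horizontal shear followed by a vertical squash, approximating a tilt.
    CGAffineTransform shear = CGAffineTransformMake(1.0, 0.0, 0.5, 1.0, 0.0, 0.0);
    CGAffineTransform squash = CGAffineTransformMakeScale(1.0, 0.6);
    [path applyTransform:CGAffineTransformConcat(shear, squash)];
    // The result is still a vector path, so its control points remain editable.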

Plugin for eclipse for displaying 3D objects

I have an XML file based on UML which contains information about classes, methods, and packages. I have to interpret it and display it in 3D, where classes would be represented by rectangles, packages by some other geometric figure, and so forth. I would also like to be able to change the view just by moving my mouse in a particular direction.
I was looking for a particular Eclipse plugin which can help me do that. My list of options includes JavaFX, Java 3D, Ardor3D, and the Flex plugin for Eclipse. Which of the above would be the best and easiest to use for my requirements?
The official Eclipse project for this kind of visualization is:
GEF3D
GEF3D is an Eclipse GEF extension bringing 3D to diagram editing. That is, with GEF3D you can create 3D diagrams, 2D diagrams, and combined 3D/2D diagrams.
GEF3D extends GEF by providing 3D enabled draw and controller classes. Instead of drawing 2D figures, you can now draw 3D figures.
(Note: even though the illustrations in the GEF3D examples are not exactly what you want, you can still use GEF3D to achieve your goal and design 3D rectangles.)