Eclipse plugin for displaying 3D objects

I have an XML file based on UML which contains information about classes, methods, and packages. I have to interpret it and display it in 3D, where classes would be represented by rectangles, packages by some other geometric figure, and so forth. I should also be able to change the view just by moving my mouse in a particular direction.
I was looking for a particular Eclipse plugin which can help me do that. I had a few options on my list: JavaFX, Java 3D, Ardor3D, and the Flex plugin for Eclipse. Which of these would be the best and easiest to use for catering to my requirements?

The official Eclipse project for this kind of visualization is:
GEF3D
GEF3D is an Eclipse GEF extension bringing 3D to diagram editing. That is, with GEF3D you can create 3D diagrams, 2D diagrams, and combinations of 3D and 2D diagrams.
GEF3D extends GEF by providing 3D-enabled draw and controller classes. Instead of drawing 2D figures, you can now draw 3D figures.
(Note: even though the illustrations in the GEF3D documentation are not exactly what you want, you can still use GEF3D to achieve what you want and design 3D rectangles.)

Related

SharpDX How To Render a 3D Environment

I just started coding some basics in SharpDX (VB.net) and I already got it to render a 2D triangle. I know how to render other 2D stuff, but I want to create something in 3D where I'm able to rotate the camera around some cubes. I tried it, but failed at converting 3D space to screen coordinates. Here are my questions:
How can I calculate a matrix for perspective projection?
How can I pass that matrix to my vertex shader?
How can I make the camera rotate around the objects when I drag the mouse across the screen?
Please explain these things to me and give some code examples. I'm just a beginner in SharpDX, and everything I found was simply not understandable to me.
A few things you can do when you first start.
Firstly, there are some great examples you can leverage and learn from (they are in C#, though, while you need VB).
I suggest you look at the SharpDX Direct3D 11 samples within the SharpDX repository.
These examples (especially the triangle example) go through the basics, including setting up the device, creating simple resources to bind to your GPU, and compiling the shader bytecode.
The samples use the effects methodology, though, which is deprecated; once you become familiar with compiling shaders, I would advise moving away from this paradigm.
The more advanced examples will show you how to set up your matrices.
The last item you asked about is mouse movement. I would advise having a look at MSDN around MouseMove events. You will need to bind one to your window/control and then read the deltas. Use those deltas to drive your rotation/movement. Look into Vector3 (SharpDX); basically, you do all of this in vector space and then build the various translation/rotation matrices from it, as in the sketch below.
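Putting those pieces together, here is a minimal sketch, not a drop-in solution: it assumes you have already created a Device/DeviceContext and a constant buffer bound to register b0 of your vertex shader, as the SharpDX Direct3D 11 samples do, and all the names here are mine:

// Sketch only: "constantBuffer" and "context" are placeholders for
// resources you create during setup, as in the SharpDX samples.
using System;
using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

class OrbitCamera
{
    float yaw, pitch;      // accumulated from mouse deltas
    float radius = 5f;     // distance of the camera from the cubes
    int lastX, lastY;

    // Call from your control's MouseMove event; dragging = left button down.
    public void OnMouseMove(int x, int y, bool dragging)
    {
        if (dragging)
        {
            yaw   += (x - lastX) * 0.01f;   // horizontal drag -> orbit
            pitch += (y - lastY) * 0.01f;   // vertical drag   -> tilt
        }
        lastX = x;
        lastY = y;
    }

    // Call once per frame before drawing: builds view * projection and
    // uploads it to the vertex shader's constant buffer.
    public void Upload(DeviceContext context, Buffer constantBuffer,
                       float width, float height)
    {
        // 1. Perspective projection: 45-degree field of view.
        Matrix proj = Matrix.PerspectiveFovLH(
            MathUtil.PiOverFour, width / height, 0.1f, 100f);

        // 2. Camera position on a sphere around the origin.
        Vector3 eye = new Vector3(
            radius * (float)(Math.Cos(pitch) * Math.Sin(yaw)),
            radius * (float)Math.Sin(pitch),
            radius * (float)(Math.Cos(pitch) * Math.Cos(yaw)));
        Matrix view = Matrix.LookAtLH(eye, Vector3.Zero, Vector3.UnitY);

        // 3. HLSL reads matrices as column-major by default, so transpose
        //    before copying the struct into the constant buffer.
        Matrix viewProj = Matrix.Transpose(view * proj);
        context.UpdateSubresource(ref viewProj, constantBuffer);
    }
}

On the shader side, the matrix lands in a cbuffer (e.g. cbuffer PerFrame : register(b0) { float4x4 viewProj; };), and the vertex shader multiplies each vertex position by it.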
Hope this is a start.

How can a 3D game render an object without having a sprite for every single angle?

When learning to program simple 2D games, I saw that each object would have a sprite sheet with little pictures of how the player would look in every frame/animation. 3D models can't work this way, or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot of images depicting how it would look from every single side. So my question is: how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image. These points make shapes. The points can also define curves. This allows you to make images without the need for pixels. The result can be smaller file sizes, in addition to the ability to transform the image (scale and rotate) without causing distortion. Most 3D graphics use a similar technique, except in 3D. What these methods have in common, however, is that they all ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer, since it doesn't have to worry about "coloring outside of the lines". One drawback is that most 3D projection technologies don't support perfect spheres or other round surfaces; you have to use approximations and other tricks to make round surfaces (although there are some renderers which do support them). The next step is to convert, or project, all of the 3D points into 2D points on the screen.
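To make the projection step concrete, here is a stripped-down sketch of my own (it ignores aspect ratio, clipping, and the near/far planes; real pipelines do this with a 4x4 matrix):

using System;

// Project a camera-space point (x, y, z) onto the screen plane.
// Assumes the camera sits at the origin looking down the +z axis.
static (float sx, float sy) Project(float x, float y, float z, float fovRadians)
{
    float f = 1f / (float)Math.Tan(fovRadians / 2); // focal length from the FOV
    return (f * x / z, f * y / z);                  // the perspective divide
}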
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, a downside is that you can't really have things like reflections and refractions. Any time you see a refractive or reflective surface in a game, trickery is being used to make it look like a reflective or refractive material. The same goes for lighting and shading.
Special coloring can be used to make a sphere approximation look smooth, although you can still see straight lines around the edges of the smoothed version.
Ray tracing
You can also render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail, since it is currently too slow to realistically use in games. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra-realistic images.
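If you are curious what "tracing a path" means in code, here is a toy primary-ray test against a single sphere (my own sketch; a real ray tracer would recurse from the hit point to follow reflected and refracted rays):

using System;

// Does a ray fired from the camera at the origin, along the normalized
// direction (dx, dy, dz), hit a sphere centered at (cx, cy, cz)?
// Returns the hit distance, or null on a miss.
static double? HitSphere(double cx, double cy, double cz, double radius,
                         double dx, double dy, double dz)
{
    // Solve |t*d - c|^2 = r^2 for t; with |d| = 1 this is the quadratic
    // t^2 + b*t + c0 = 0.
    double b = -2 * (dx * cx + dy * cy + dz * cz);
    double c0 = cx * cx + cy * cy + cz * cz - radius * radius;
    double disc = b * b - 4 * c0;
    if (disc < 0) return null;         // the ray misses the sphere
    return (-b - Math.Sqrt(disc)) / 2; // nearest intersection distance
}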
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths, which means you only get flat surfaces, not reflective ones. It also does not produce realistic lighting. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In those games, ray casting was used for the maps, while the characters and other items were rendered using 2D sprites that were always facing the camera; the sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
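A toy version of the column-by-column loop looks something like this (illustrative only; the real engines used a much faster grid-stepping algorithm instead of this brute-force march, and the map is assumed to have a solid wall border so every ray terminates):

using System;

// Cast one ray per screen column through a 2D grid map and return the
// height of the wall strip to draw in each column.
static int[] CastColumns(int[,] map, double px, double py, double facing,
                         double fov, int screenWidth, int screenHeight)
{
    var heights = new int[screenWidth];
    for (int col = 0; col < screenWidth; col++)
    {
        double a = facing - fov / 2 + fov * col / screenWidth;
        double dist = 0;
        int mx, my;
        do // march along the ray until we enter a wall cell
        {
            dist += 0.01;
            mx = (int)(px + Math.Cos(a) * dist);
            my = (int)(py + Math.Sin(a) * dist);
        } while (map[my, mx] == 0);
        // Closer walls produce taller strips on screen.
        heights[col] = (int)(screenHeight / Math.Max(dist, 0.0001));
    }
    return heights;
}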
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
The data can be stored using a number of file formats. A common one is COLLADA, an XML file that stores the 3D data. There are a lot of other formats, though; fundamentally, they all store the same kind of 3D data.
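As a rough illustration (the type and field names here are made up for this answer, not taken from any particular engine or format), the stored data often boils down to a few parallel arrays:

// Hypothetical in-memory mesh: per-vertex positions and texture
// coordinates, plus an index list where every three entries name the
// corners of one triangle.
class Mesh
{
    public float[] Positions; // x, y, z triples, one per vertex
    public float[] UVs;       // u, v pairs for texture mapping
    public int[] Indices;     // 3 indices per triangle
}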
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make a 3D bitmap, so you have a 3D grid of pixels. One way of rendering voxels is to convert the voxel points into 3D cubes. Note that voxels do not have to be rendered as cubes, however; like pixels, they are only points that may have color data, which can be interpreted in different ways. I won't go into much detail, since this isn't too common and you generally render voxels with polygon methods (as when you render them as cubes). Here is an example of a voxel model:
Image by Wikipedia user Vossman
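To give a sense of how simple the storage side is, a voxel model can be nothing more than a 3D array (an illustrative sketch, not any particular format):

// A 32x32x32 voxel grid; each cell holds a color index, 0 meaning empty.
// A renderer walks the grid and emits a cube (or smarter geometry) for
// every non-zero cell.
byte[,,] voxels = new byte[32, 32, 32];
voxels[16, 16, 16] = 1; // fill one voxel near the center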
In the 2D world with sprite sheets, you draw one of the sprites depending on the state of the actor (the visual representation of your object). In the 3D world, you render a model for your actor: a series of polygons with a texture mapped onto them. There are standardized model files (I am mostly familiar with Autodesk 3ds Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object; you have a model with a texture rendered onto it, creating a dynamic view as it is rendered by the graphics library.

How to put a custom 3D object on a 3D MKMapView in iOS 7.0

I am working on an iOS 7.0 app which contains an MKMapView. I succeeded in making the map 3D using MKMapCamera, but I have no idea how to put custom objects, written in COLLADA, wherever I want in this 3D space.
First question: is it possible to put custom 3D objects on an MKMapView?
Second question: if it is possible, what kind of information should I prepare for it? If only 3D polygons (sets of 3D vertices) are available, I should look for a different library which can convert COLLADA files into some kind of polygon-set class.

How can I draw shapes other than cubes?

How can I draw shapes other than cubes with x3dom? For instance, one shape I want to draw is a semicircle/cylinder. I can only find documentation for drawing a cube, which I already do. I have not yet found good documentation for x3dom.
X3DOM actually just wraps X3D, so you may be better off starting with the X3D documentation. A quite good overview can be found here: X3D Slides.
There are some other primitives provided like:
<Shape>
  <Sphere radius='1'/>
  <Appearance>
    <Material/>
  </Appearance>
</Shape>
Also the examples provide a good starting point.
To use a cylinder, just use this in the Shape node:
<Cylinder/>
As mistapink says, check out the X3D docs. There are a few basic X3D primitives, such as Sphere, Box, Cylinder, and Cone.
Anything more elaborate would require you to use point sets, etc., or you can model your own objects and export them to X3D from lots of 3D apps: Blender, 3ds Max, LightWave, etc.

3D Transformations on a Quartz2D path — Drawing Application

I'm in the planning stage of writing a Cocoa drawing application (for Mac, not iOS), and I'm trying to discern whether one of my features is technically possible via any of the drawing frameworks. Any help or relevant information would be greatly appreciated.
The idea is to apply a 3D transformation to an object drawn with Quartz2D. I've considered capturing the relevant portion of the canvas view (where objects are drawn) as an image and sending it to Core Animation, but that doesn't seem like the best option. Since this is a drawing application, it's less about 3D animation than it is about the transformed shape. This solution is also less than ideal because I assume that if the 2D object were a vector path rather than a bitmap image, I would have to rasterize it to apply such a transformation. The ideal implementation would enable the user to dynamically rotate a flat object in 3 dimensions until she found a suitable orientation, lock in this transformation, and still be able to manually adjust the path's vector points.
Is this feasible? Would it require working directly with OpenGL? Help of any kind is most welcome.
Thank you!
Seems to me that anything you'd do with a 3D transform, you should be able to approximate with multiple affine transforms: an orthographic projection of a plane rotated in 3D is itself an affine map of the plane's original 2D coordinates, so the rotated path can be reproduced without rasterizing it. See UIBezierPath's applyTransform: method (on the Mac, the equivalent is NSBezierPath's transformUsingAffineTransform:).