A-Frame object-centric camera rotation

Ultimately, when using a VR headset, I'd like the camera to rotate around the object as the viewer's head moves. When I look up, for instance, I want to see the bottom of the object, and when I look down, I want to see the top of it. The Google exhibit demos for Cardboard have this feature. I'm a newbie when it comes to this stuff, so be gentle. If there are tutorials or literature you'd recommend to become more familiar with it, I'm all ears.

Related

AnyLogic travelling camera

I want to show my model in a cool video in which a camera travels through it.
Is there any guide on smooth tracking shots and the functions that can be used for them?
I could not find good information in the AnyLogic library :(
You can design any camera movement you like.
The simplest way is to create a custom agent with a camera at its origin and make the agent move around, or add a camera to existing agents to "view over their shoulder".
Or check the example models with the search term "camera".
Or make your camera positions fully dynamic; a sketch of that idea follows.
You can do anything you want :)
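If you want a smooth flight path rather than straight hops, the standard trick is to interpolate the camera through a handful of keyframe positions with a spline. Here is a minimal, engine-agnostic sketch of that math (shown in C++ for illustration; in AnyLogic you would evaluate the same formula from a cyclic event and copy each sampled point onto your camera, or onto the agent carrying it; all names and keyframe values here are illustrative):

```cpp
// Smooth tracking shot: Catmull-Rom spline through camera keyframes.
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Catmull-Rom interpolation between p1 and p2, with t in [0,1].
Vec3 catmullRom(const Vec3& p0, const Vec3& p1,
                const Vec3& p2, const Vec3& p3, double t)
{
    auto blend = [t](double a, double b, double c, double d) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2 * b) + (-a + c) * t
                    + (2 * a - 5 * b + 4 * c - d) * t2
                    + (-a + 3 * b - 3 * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y),
             blend(p0.z, p1.z, p2.z, p3.z) };
}

int main()
{
    // Keyframes of the flight path through the model (illustrative values).
    std::vector<Vec3> key = {
        {0, 0, 10}, {20, 5, 10}, {40, 5, 2}, {60, 0, 8}, {80, 0, 12}
    };
    // Sample each spline segment; each sample is one camera position per tick.
    for (size_t i = 1; i + 2 < key.size(); ++i)
        for (double t = 0.0; t < 1.0; t += 0.1) {
            Vec3 p = catmullRom(key[i - 1], key[i], key[i + 1], key[i + 2], t);
            std::printf("camera at (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
        }
}
```

Each sampled point becomes one camera position per model time step; a smaller step between samples gives a slower, smoother shot.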

SharpDX How To Render a 3D Environment

I just started coding some basics in SharpDX (VB.NET) and I already got it to render a 2D triangle. I know how to render other 2D stuff too, but now I want to create something in 3D where I can rotate the camera around some cubes. I tried it, but failed at converting the 3D space to screen coordinates. Here are my questions:
How can I calculate a matrix for perspective projection?
How can I pass that matrix to my vertex shader?
How can I make the camera rotate around the objects when I drag the mouse over the screen?
Please explain these things to me and give some code examples. I'm just a beginner in SharpDX, and everything I found so far was not understandable to me.
Here are a few things you can do when you first start.
Firstly, there are some great examples you can learn from (they are in C# rather than the VB you're using, but they translate directly).
I suggest you look at the SharpDX Direct3D 11 samples within the SharpDX repository.
These examples (especially the triangle sample) go through the basics, including setting up the device, creating simple resources to bind to your GPU, and compiling the shader bytecode.
The samples do use the Effects framework, though, which is deprecated; once you become familiar with compiling shaders yourself, I would advise moving away from that paradigm.
The more advanced examples will show you how to set up your matrices; a minimal sketch of the idea follows.
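For your first two questions, the usual pattern is: build world, view, and projection matrices on the CPU, multiply them together, and upload the result into a constant buffer bound to the vertex shader. Below is a sketch in native D3D11/DirectXMath terms; SharpDX mirrors these calls almost one-to-one (Matrix.PerspectiveFovLH, Matrix.LookAtLH, context.UpdateSubresource, context.VertexShader.SetConstantBuffer). The helper name and the eye/target values are illustrative, and it assumes you already have the device and context from the triangle sample:

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

struct PerFrame { XMMATRIX worldViewProj; };   // must match the cbuffer in HLSL

// Hypothetical helper: builds world*view*proj and uploads it to slot b0.
void UploadMatrices(ID3D11Device* device, ID3D11DeviceContext* context,
                    ID3D11Buffer** cb, float aspect)
{
    if (*cb == nullptr) {                      // create the constant buffer once
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(PerFrame);     // must be a multiple of 16 bytes
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        device->CreateBuffer(&desc, nullptr, cb);
    }

    XMMATRIX world = XMMatrixIdentity();
    XMMATRIX view  = XMMatrixLookAtLH(
        XMVectorSet(0.0f, 2.0f, -5.0f, 1.0f),  // eye (illustrative)
        XMVectorSet(0.0f, 0.0f,  0.0f, 1.0f),  // target
        XMVectorSet(0.0f, 1.0f,  0.0f, 0.0f)); // up
    XMMATRIX proj  = XMMatrixPerspectiveFovLH(XM_PIDIV4, aspect, 0.1f, 100.0f);

    PerFrame data;
    // HLSL reads matrices column-major by default, so transpose before upload.
    data.worldViewProj = XMMatrixTranspose(world * view * proj);

    context->UpdateSubresource(*cb, 0, nullptr, &data, 0, 0);
    context->VSSetConstantBuffers(0, 1, cb);   // slot 0 = register(b0) in HLSL
}
```

On the HLSL side, the matching declaration is cbuffer PerFrame : register(b0) { float4x4 worldViewProj; }; and the vertex shader applies it with mul(input.position, worldViewProj).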
The last item you asked about is mouse movement. Have a look at MSDN around MouseMove events: bind a handler to your window/control, read the deltas, and use those deltas to drive your rotation/movement. Look into Vector3 (SharpDX); you basically do all of this in vector space and then build the various translation/rotation matrices from it. A sketch of the orbit math follows.
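For the third question, accumulate the mouse deltas into yaw and pitch angles in your MouseMove handler (e.g. yaw += deltaX * 0.005f) and convert them into an eye position on a sphere around the target. A sketch of that conversion, again in native DirectXMath terms (SharpDX's Matrix.LookAtLH is the equivalent):

```cpp
#include <DirectXMath.h>
#include <algorithm>
#include <cmath>
using namespace DirectX;

// yaw/pitch are accumulated in your MouseMove handler,
// e.g. yaw += deltaX * 0.005f; pitch += deltaY * 0.005f;
XMMATRIX OrbitView(float yaw, float pitch, float radius)
{
    // Clamp pitch so the camera never flips over the poles.
    pitch = std::clamp(pitch, -XM_PIDIV2 + 0.01f, XM_PIDIV2 - 0.01f);

    // Spherical coordinates: an eye position on a sphere around the origin.
    XMVECTOR eye = XMVectorSet(
        radius * cosf(pitch) * sinf(yaw),
        radius * sinf(pitch),
        radius * cosf(pitch) * cosf(yaw), 1.0f);

    // Look from that point back at the cubes in the center.
    return XMMatrixLookAtLH(eye, XMVectorZero(),
                            XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));
}
```

Feed the returned view matrix, together with the projection matrix from the previous sketch, into the constant-buffer upload shown above.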
Hope this is a start.

Implementing a 3D Minimap for VR in Unreal Engine

I am working on a VR tower defense game. I want to place my towers on a small map of the field and at the same time show the field/units/towers on the small map, in 3D. Like this:
http://halo.bungie.net/images/games/Halo3ODST/imagery/screenshots/H3ODST_PreparetoDropCinematic.jpg
The map would be something like a small clone of the field. Is there a way to do this with a camera or similar, so that my minimap is just a re-render/clone of the field?
Sorry if this is the wrong place, but the Unreal Engine Forum is not working at the moment.
I don't know of a simple camera projection to 3D (camera-based minimaps are normally scene captures rendered onto 2D textures).
You can do a fake/tricky camera implementation, though: show a normal camera projection and place it in the world, then make the actual capture camera's position depend on where the user is located in relation to the map; a sketch of this follows.
You can cover this up with particle effects so it doesn't look like some TV that is following you around.
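A rough sketch of that idea in UE4 C++, using a USceneCaptureComponent2D whose render target is shown on the map surface. The actor name, the Capture member, and fields like MapBoardLocation, FieldOrigin, and FieldToMapRatio are illustrative, not engine API:

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Kismet/GameplayStatics.h"

// Illustrative: AMiniMapCamera holds a USceneCaptureComponent2D* Capture
// whose render target is displayed on the map board in the VR scene.
void AMiniMapCamera::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    APawn* Player = UGameplayStatics::GetPlayerPawn(GetWorld(), 0);
    if (!Player) return;

    // Where the viewer stands relative to the map board, scaled back up
    // and re-applied over the real battlefield.
    const FVector ViewOffset = Player->GetActorLocation() - MapBoardLocation;
    Capture->SetWorldLocation(FieldOrigin + ViewOffset * FieldToMapRatio);

    // Keep the capture camera looking at the battlefield center, so the
    // flat capture reads almost like a 3D view as the player moves.
    Capture->SetWorldRotation(
        (FieldOrigin - Capture->GetComponentLocation()).Rotation());
}
```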
Depending on the complexity of your game and how much you like the solution above, it might be better to implement a calculated model.
That is, keep miniature models of everything you want to show on the map and have a map class project the miniature version of every "object that is shown on the map" onto the map object; a sketch of this follows too.
Of course this requires a ton of work compared to just setting up a camera, but it gives you far better control over how the map looks and what it can do (you could add functionality, control options, special views, etc.).
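A sketch of that calculated-model variant: a map actor that, every tick, copies each tracked actor's transform onto a scaled-down proxy attached to the map board. AMiniMapBoard, FTrackedPair, TrackedActors, FieldOrigin, and MapScale are all made-up names for illustration:

```cpp
#include "GameFramework/Actor.h"

// One battlefield actor and its miniature clone on the map board.
// Proxies are assumed to already be attached to the board (AttachToActor).
struct FTrackedPair
{
    AActor* Source = nullptr;   // real unit/tower on the battlefield
    AActor* Proxy  = nullptr;   // miniature clone attached to the map board
};

void AMiniMapBoard::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    for (const FTrackedPair& Pair : TrackedActors)
    {
        if (!Pair.Source || !Pair.Proxy) continue;

        // Position relative to the battlefield origin, shrunk to map scale.
        const FVector Local = Pair.Source->GetActorLocation() - FieldOrigin;
        Pair.Proxy->SetActorRelativeLocation(Local * MapScale);  // e.g. 0.01f
        Pair.Proxy->SetActorRelativeRotation(Pair.Source->GetActorRotation());
    }
}
```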

Kinect: How to obtain a skeleton from back view?

Why would anyone ever want something like this?
I want to track a single user who is suspended above the ground in a horizontal position, facing downwards to allow free movement of the legs and arms. Think of swimming, for example.
I mounted the Kinect on the ceiling facing downwards, so I have a free view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (a person is usually taller than they are wide).
As a result, the user is seen from the back, rotated by 90°, and it is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same, but I excluded it here because it doesn't let you change the source code and so cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class generates the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z, if I knew where to find it.
EDIT: As I just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?

Create mock 3D "space" with forwards and backwards navigation

In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scroll-bar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in on the space, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back, and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution within openGL, I just don't know where to begin.
Check out Nick Lockwood's iCarousel on github. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, in OpenGL (ES) it is relatively easy to accomplish what you ask; however, it may not be equally easy to explain to someone who is not confident with OpenGL :)
First of all, I suggest you take a look at The Red Book, the OpenGL reference guide, or at the OpenGL Wiki.
To begin, you can practice with GLUT; it will help you gain confidence with OpenGL by providing a high-level API that lets you skip the boring work of setting up an OpenGL context and go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it has essentially the same structure. Once you have understood how to use OpenGL, it shouldn't be difficult to move to OpenGL ES. Of course, Apple's documentation is a very important resource.
Once you know your way around OpenGL, you should be able to understand how to structure your program.
You could, for example, keep your viewpoint fixed and translate the world (or vice versa), as in the sketch below. There is (of course) no universal solution, especially because the only thing that matters is the final result.
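To make the "translate the world" idea concrete, here is a tiny GLUT sketch: a single scroll value drives a translation along z, so a row of placeholder cubes slides toward or away from the viewer, Time Machine style. The keyboard stands in for your scroll bar, and all values are illustrative; it compiles against any GLUT implementation (e.g. freeglut):

```cpp
#include <GL/glut.h>

static float scroll = 0.0f;                    // driven by your scroll bar

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, scroll);          // move the whole world, not the camera

    for (int i = 0; i < 10; ++i) {             // ten "items" stacked into depth
        glPushMatrix();
        glTranslatef(0.0f, 0.0f, -3.0f * i - 5.0f);
        glutWireCube(1.0);                     // placeholder for your content
        glPopMatrix();
    }
    glutSwapBuffers();
}

void keyboard(unsigned char key, int, int)
{
    if (key == 'w') scroll += 0.5f;            // zoom into the space
    if (key == 's') scroll -= 0.5f;            // zoom back out
    glutPostRedisplay();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("mock 3D space");
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60.0, 640.0 / 480.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMainLoop();
}
```

In a real app you would also fade items out as they pass behind the viewpoint, but the core of the effect is just this one translation.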
Another solution (maybe equally good, depending on your needs) is to simply scale images representing the objects of your world up and down to simulate moving through them.
For example, you could store all of your images in an array and use a slider to increase/decrease the displayed size of each image. Once an image becomes too large for the display, gradually decrease its alpha so that the image behind it slowly appears. Take a look at the UIImageView reference; it contains all the APIs you need.
This loses true three-dimensionality, but it's probably a simpler/faster solution than learning OpenGL.