Perspective Rotation about Y axis

I have a 2D image and I want to create an anaglyph image from this single 2D image. To do this I need to create left and right views. I will consider my 2D image as the left view, and I now want to create the right view.
I came to know that a perspective rotation (about the Y axis) and perspective skews will give the right image.
I know that perspective projection is related to 3D.
Basically I am new to 3D programming.
Can you please explain how to do a perspective rotation about the Y axis, and how I can apply it to my 2D image? I am using C++.
Thank you very much,
N.A.Reddy.

You can't create a true anaglyph from a single 2D image. You need either two 2D images taken from viewpoints slightly apart from each other, or you need 3D (depth) information. You can try to generate 3D information from a 2D image, but that is extremely hard and an active area of research.
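If you still want to experiment with the rotation itself: a pure rotation of the image plane about the Y axis can be written as the homography H = K * Ry * K^-1 and applied with OpenCV's warpPerspective. This is only a sketch under assumed intrinsics (the focal length below is a guess), and it adds no real parallax, so at best it produces a rough fake of a right-eye view:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Hedged sketch: simulate a small rotation of the image plane about the Y axis
// with the homography H = K * Ry * K^-1. K is a guessed intrinsic matrix; this
// skews the picture in perspective but does NOT create real parallax.
cv::Mat fakeRightView(const cv::Mat& left, double angleRad)
{
    double f  = left.cols;                       // assumed focal length in pixels
    double cx = left.cols / 2.0, cy = left.rows / 2.0;
    cv::Mat K  = (cv::Mat_<double>(3, 3) << f, 0, cx,  0, f, cy,  0, 0, 1);
    double c = std::cos(angleRad), s = std::sin(angleRad);
    cv::Mat Ry = (cv::Mat_<double>(3, 3) << c, 0, s,  0, 1, 0,  -s, 0, c);
    cv::Mat H = K * Ry * K.inv();                // rotate the image plane
    cv::Mat right;
    cv::warpPerspective(left, right, H, left.size());
    return right;
}
```

A small angle (around 1 to 3 degrees) keeps the distortion subtle; anything larger makes the fake obvious.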

Related

2D image to 3D world coordinate in perspective view

I have been trying to locate objects detected in a 2D image in 3D space, for a single fixed camera installed at a known height.
I went through similar questions, but the perspective view is not addressed in them.
What I have:
The height of the camera
Calibration parameters
The exact location of one fixed object in view
I've written a set of solutions to this kind of problem. 3D point reconstruction from 2D coordinates (yes, "in perspective") is obtained by means of the extrinsics matrix. See https://github.com/rodolfoap/screen2world-k. Other methods are linked from there.
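For illustration, a minimal sketch of the usual geometry (not code from that repository; the names and types are mine): with calibrated intrinsics K and extrinsics R, t, a pixel is back-projected to a ray and intersected with the ground plane Z = 0, which is where the known camera height enters:

```cpp
#include <opencv2/core.hpp>

// Hypothetical sketch: back-project pixel (u, v) onto the ground plane Z = 0,
// given intrinsics K and extrinsics R, t from calibration.
cv::Vec3d pixelToGround(double u, double v,
                        const cv::Matx33d& K,
                        const cv::Matx33d& R,
                        const cv::Vec3d& t)
{
    // Camera center in world coordinates: C = -R^T * t
    cv::Vec3d C = (R.t() * t) * -1.0;
    // Viewing-ray direction in world coordinates: d = R^T * K^-1 * [u v 1]^T
    cv::Vec3d d = R.t() * (K.inv() * cv::Vec3d(u, v, 1.0));
    // Intersect the ray C + lambda * d with the plane Z = 0
    double lambda = -C[2] / d[2];
    return C + lambda * d;
}
```

The camera height shows up as C[2]; objects that do not stand on the ground plane need a different reference plane or a second view.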

Texture being stretched horizontally in Cinema 4D

I have a Poliigon Texture Demo c4d file. The file includes a sphere with a texture that renders correctly (bottom sphere in the image). However, when I create a sphere (top sphere in the image), convert it to a polygonal object and apply the same texture, the texture is stretched horizontally.
I can fix this by changing the "Length U" setting to 50% in the Texture Tag, but I notice that the sphere below does not need this modification, so I was wondering how to convert the top sphere to a polygonal object the same way the bottom sphere was converted.
Cinema 4d Example
I have included a screengrab. The only notable difference is that the sphere below has additional diagonal division.
I am quite new to 3D so hope this all makes sense.
I think you only need to change the sphere's Type to a triangular type (such as Icosahedron), like the sphere at the bottom; the diagonal divisions you noticed are the giveaway.
If this helps, please consider up-voting and marking your question as solved.

How to detect an image between shapes from camera

I've been searching around the web about how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I have seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as a result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are more than you need; two suffice. The construction of the affine matrix is a little tricky.
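A sketch of that construction (the point names are mine): treating the points as complex numbers, the two correspondences p0 -> q0 and p1 -> q1 determine the rotation-plus-scale term (a + i*b) = (q1 - q0) / (p1 - p0) and the translation that sends p0 exactly to q0:

```cpp
#include <opencv2/imgproc.hpp>

// Hypothetical sketch: build a 2x3 similarity matrix from two point pairs
// (p0 -> q0, p1 -> q1), suitable for cv::warpAffine.
cv::Mat similarityFromTwoPoints(cv::Point2d p0, cv::Point2d p1,
                                cv::Point2d q0, cv::Point2d q1)
{
    cv::Point2d dp = p1 - p0, dq = q1 - q0;
    double den = dp.x * dp.x + dp.y * dp.y;        // |dp|^2
    // (a + i*b) = dq / dp in complex arithmetic: rotation + uniform scale
    double a = (dq.x * dp.x + dq.y * dp.y) / den;
    double b = (dq.y * dp.x - dq.x * dp.y) / den;
    // translation chosen so that p0 maps exactly to q0
    double tx = q0.x - (a * p0.x - b * p0.y);
    double ty = q0.y - (b * p0.x + a * p0.y);
    return (cv::Mat_<double>(2, 3) << a, -b, tx,  b, a, ty);
}

// Usage: cv::warpAffine(src, dst, similarityFromTwoPoints(p0, p1, q0, q1), dstSize);
```

If you detect all three circle centers, OpenCV's estimateAffinePartial2D computes the same similarity transform in a least-squares sense, which also absorbs small detection errors.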

QSGGeometryNode depth (z) problems with 3 vertices

I am drawing 3D geometry (Point3D vertices) in a QML scene graph with a custom QSGGeometryNode and QSGTransformNode. This works, except that the 3D model is cut off at a certain z-coordinate (z is the depth axis in QML). At first I expected the problem to be an intersection with the QML 2D plane, but I tried moving the model along the z axis and it always gets cut off (as if there were a local model frustum clipping plane).
What could be the source of this problem?
Regards,
Unfortunately you can't "just" render 3D content inside the scene, as the scene graph will compress your Z values to make them honour proper stacking of the items.
If you have a 3D object, you may want to use QQuickFramebufferObject instead (see also this blog post).
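A minimal skeleton of that approach, assuming Qt 5 and raw OpenGL (the class names are mine): the item draws into its own framebuffer with a real depth attachment, so the scene graph's Z handling no longer clips the model:

```cpp
#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>

// Hypothetical item exposed to QML; it delegates all GL work to a renderer.
class ModelItem : public QQuickFramebufferObject
{
    Q_OBJECT
public:
    Renderer *createRenderer() const override;
};

class ModelRenderer : public QQuickFramebufferObject::Renderer
{
    void render() override
    {
        // Full OpenGL state and a private depth buffer: draw the 3D model here
        // (glClear, glDrawArrays, ...).
        update(); // schedule another frame if the model animates
    }
    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
    {
        QOpenGLFramebufferObjectFormat fmt;
        fmt.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, fmt);
    }
};

QQuickFramebufferObject::Renderer *ModelItem::createRenderer() const
{
    return new ModelRenderer;
}
```

The CombinedDepthStencil attachment is the important part: it gives the FBO its own depth buffer, independent of the scene graph's compressed Z range.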

World space to screen space (perspective projection)

I'm using a 3D engine and need to translate between 3D world space and 2D screen space using perspective projection, so I can place 2D text labels on items in 3D space.
I've seen a few posts with various answers to this problem, but they seem to use components I don't have.
I have a Camera object, and can only set its current position and look-at position; it cannot roll. The camera is moving along a path, and a target object may appear in its view and then disappear.
I have only the following values
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems; OpenGL and DirectX both use them, because they are the standard way.
Cameras usually construct the matrices from exactly the parameters you have:
view matrix (transforms the world so you are looking at it from the camera position); it uses the look-at position and the camera position (plus an up vector, which is usually (0, 1, 0))
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far and aspect ratio.
You can find information on how to construct these matrices by searching for the OpenGL functions that create them:
gluLookAt creates a view matrix
gluPerspective creates a projection matrix
I can't imagine an engine that doesn't let you get these matrices; I can assure you they are somewhere, because the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates, so you just multiply it with the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
The result will be in homogeneous coordinates: divide the X, Y, Z of the resulting vector by W, and you have the position in normalized screen coordinates (0 means the center, 1 means right, -1 means left, and so on).
From there it is easy to get pixel coordinates by multiplying by the width and height.
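Put together, a sketch of the whole pipeline using GLM as the math library (the function name and visibility flag are mine; fovY is in radians):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: project a world-space point to pixel coordinates from exactly the
// values listed in the question. aspect ratio = width / height.
glm::vec2 worldToScreen(const glm::vec3& target,
                        const glm::vec3& camPos, const glm::vec3& lookAt,
                        float fovY, float zNear, float zFar,
                        float width, float height, bool* visible)
{
    glm::mat4 view = glm::lookAt(camPos, lookAt, glm::vec3(0, 1, 0));
    glm::mat4 proj = glm::perspective(fovY, width / height, zNear, zFar);
    glm::vec4 clip = proj * view * glm::vec4(target, 1.0f); // homogeneous
    *visible = clip.w > 0.0f;                 // behind the camera if w <= 0
    glm::vec3 ndc = glm::vec3(clip) / clip.w; // perspective divide -> [-1, 1]
    // map NDC to window coordinates (origin at the top-left)
    return glm::vec2((ndc.x * 0.5f + 0.5f) * width,
                     (1.0f - (ndc.y * 0.5f + 0.5f)) * height);
}
```

The w > 0 check matters for your moving camera: it tells you when the target has gone behind the viewpoint, so you can hide the label instead of drawing it mirrored.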
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work in 3D it is really important to understand the three matrices (model, view and projection); otherwise you will stumble every time.
so I can place 2d text labels on items in 3d space
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.
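As a sketch of the idea, assuming a GLM-style math library (the helper is hypothetical): you overwrite the rotation part of the label's model-view matrix with the identity, so the quad keeps its world position but always faces the camera:

```cpp
#include <glm/glm.hpp>

// Hypothetical billboard helper: cancel the camera rotation in the model-view
// matrix while keeping the label's transformed world position.
glm::mat4 billboardModelView(const glm::mat4& view, const glm::vec3& labelPos)
{
    glm::mat4 mv(1.0f);
    mv[3] = view * glm::vec4(labelPos, 1.0f); // keep the view-space position
    // rotation columns stay identity -> the quad always faces the camera
    return mv;
}
```

Render the label quad with this matrix in place of the usual model-view product and it will stay screen-aligned regardless of where the camera moves along its path.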