Rotate React Native Image around a pivot point

I would like to rotate a React Native Image based on an x/y point on the image, rather than the default centre.
Say I have a needle image that I want to sweep around, like the needle of a car speedometer. I need to rotate it around a pivot point at the end of the image, rather than around the image centre.
Can this be done?
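One common approach, sketched here with an assumed needle size and asset path, is to translate the desired pivot to the view's centre, rotate, and translate back, since React Native applies transforms around a view's centre by default:

```tsx
import React from 'react';
import { Image } from 'react-native';

// Hypothetical needle image: 20 x 200 px, pivot at the bottom-centre of the image.
const NEEDLE_HEIGHT = 200;

export function Needle({ angle }: { angle: number }) {
  return (
    <Image
      source={require('./needle.png')} // assumed asset
      style={{
        width: 20,
        height: NEEDLE_HEIGHT,
        transform: [
          { translateY: NEEDLE_HEIGHT / 2 },  // move the pivot (bottom-centre) to the view centre
          { rotate: `${angle}deg` },          // sweep like a speedometer needle
          { translateY: -NEEDLE_HEIGHT / 2 }, // move it back
        ],
      }}
    />
  );
}
```

Newer React Native versions also support a transformOrigin style property, which can express the same pivot more directly.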

Related

custom circle, polygon, drawing a line from one place to another in react native

I have a Google Map in React Native. I want to draw a circle with a 2 km radius at any location I choose, and I also want to draw a line from one location to another. Is this possible? If yes, please tell me how.
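A sketch of one way to do this, assuming the map is rendered with the react-native-maps library (the coordinates below are placeholders):

```tsx
import React from 'react';
import MapView, { Circle, Polyline } from 'react-native-maps';

// Placeholder coordinates for illustration.
const center = { latitude: 37.78825, longitude: -122.4324 };
const destination = { latitude: 37.75825, longitude: -122.4624 };

export function MapWithCircleAndLine() {
  return (
    <MapView
      style={{ flex: 1 }}
      initialRegion={{ ...center, latitudeDelta: 0.1, longitudeDelta: 0.1 }}
    >
      {/* 2 km circle around the chosen location (radius is in metres) */}
      <Circle
        center={center}
        radius={2000}
        strokeColor="rgba(0,0,255,0.8)"
        fillColor="rgba(0,0,255,0.2)"
      />
      {/* Straight line from one location to another */}
      <Polyline coordinates={[center, destination]} strokeWidth={3} />
    </MapView>
  );
}
```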

How to make 3D text appear in the camera view in Blender

I am creating 3D text in Blender 2.8, but I am having a hard time making it appear in the camera view.
I have tried rotating and scaling the camera, but it doesn't help.
The camera object defines the viewpoint for the final image. As in real life, if you want to get more into the picture you need to move the camera back or zoom out.
Compare the framing with the camera close and with the camera far away.
To zoom out, you reduce the camera's focal length.

360º camera in three.js

Does anyone know how to create a 360º camera in three.js?
I'm trying to render the entire scene as a 360º panorama, like you would with a GoPro 360 rig.
I'm trying to recreate a panorama by arranging several screens in a circle and stretching a three.js window across all of them.
For this I need a very wide window with a three.js camera that captures the entire scene in 360º.
Is this possible?
It is definitely possible, as it was already implemented: https://github.com/spite/THREE.CubemapToEquirectangular
That library only exports snapshots as PNG, but looking at the code it should be possible to integrate the same method into real-time rendering if you want to.
You can't represent a 360-degree view with a conventional view matrix. You need to render to a set of textures (e.g. the six faces of a cube) and then combine them into a 360-degree mapping such as equirectangular.
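A minimal sketch of the cube-face approach, assuming a recent three.js version with WebGLCubeRenderTarget (the scene setup is reduced to a placeholder):

```ts
import * as THREE from 'three';

// Placeholder scene and renderer setup.
const scene = new THREE.Scene();
scene.add(new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial()));
const renderer = new THREE.WebGLRenderer();

// Render the scene into the six faces of a cube render target.
const cubeRenderTarget = new THREE.WebGLCubeRenderTarget(1024);
const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeRenderTarget);
scene.add(cubeCamera);

function captureFrame() {
  cubeCamera.update(renderer, scene); // renders all six directions around the camera
  // cubeRenderTarget.texture now holds the full 360º view as a cubemap;
  // a helper such as THREE.CubemapToEquirectangular can unwrap it into an
  // equirectangular panorama for display or export.
}

captureFrame();
```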

iOS Objective-C: converting coordinates from absolute image location to my view's coordinate system

I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use the convertRect:toView: method, but the image itself is not a UIView, so it doesn't work. I also have a few views embedded inside each other, with the image finally shown in the innermost one.
I want to convert the bounds of the detected faces from image coordinates to the exact location where each face appears on screen in the embedded image view.
How can this be accomplished?
Thanks!
The image shown on the phone is scaled to fit the screen using aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that gets it right: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled down (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
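The mapping itself is just a y-axis flip, a scale, and a letterbox offset. Here is a language-neutral sketch of that math (written in TypeScript purely for illustration; it is not the iOS API):

```ts
// CIDetector reports rects in full-image pixels with the origin at the
// bottom-left; UIKit views use a top-left origin.
interface Rect { x: number; y: number; width: number; height: number; }

function faceRectToViewRect(face: Rect, imageW: number, imageH: number,
                            viewW: number, viewH: number): Rect {
  // Aspect-fit scale: the whole image is shrunk until it fits inside the view.
  const scale = Math.min(viewW / imageW, viewH / imageH);
  // Letterbox offsets: the scaled image is centred inside the view.
  const offsetX = (viewW - imageW * scale) / 2;
  const offsetY = (viewH - imageH * scale) / 2;
  // Flip the y axis (Core Image measures from the bottom, UIKit from the top).
  const flippedY = imageH - face.y - face.height;
  return {
    x: offsetX + face.x * scale,
    y: offsetY + flippedY * scale,
    width: face.width * scale,
    height: face.height * scale,
  };
}
```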

Camera image size

I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera, so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images at 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to be able to scale and position the layers properly.
I won't know the type of camera that is attached until run-time so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize: using the whole image and making it a different size.
If you don't want to distort the image but only use a portion of it, you need to crop. Some hardware supports cropping; with hardware that doesn't, you have to do it in software. Cropping uses only a portion of the original image. In your case, you would discard the bottom 120 lines.
Example (from here):
The blue rectangle is the natural (original) image, and the red rectangle is a crop of it.
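As a small sketch of the arithmetic (TypeScript used only for illustration, not Cocoa API), cropping a 640x480 capture to fit a 640x360 view keeps the width and discards 120 rows:

```ts
// Compute how many rows to keep/discard so a captured frame matches a target
// aspect ratio by cropping instead of stretching.
function cropForAspect(srcW: number, srcH: number, dstW: number, dstH: number) {
  const keptRows = Math.round(srcW * (dstH / dstW)); // height matching the target ratio
  return { keptRows, discardedRows: srcH - keptRows };
}

// 640x480 capture shown in a 640x360 view:
console.log(cropForAspect(640, 480, 640, 360)); // { keptRows: 360, discardedRows: 120 }
```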