Rotate View/Image around a specific point, not the center (React Native)

I'm trying to rotate a view around a certain point, not the center (just like transform-origin in CSS), in React Native. At the moment, elements can only be rotated around their center. The result should be two views/images that share a common point and lie exactly on top of each other at that point, while one of the views is rotated around it. React Native does not support this layout feature yet, so I have to calculate the offsets and then add them using translateX and translateY.
I've tried several approaches, e.g. this one, but I'm not getting the desired result. React Native's matrix transform is deprecated, so it's better not to use it. I know it has to be simple trigonometry, but I don't have any idea how.
I think I first need to take the difference between the point and the center of the object, then apply an angle, and then? If I use the offset calculated before applying the rotation, I don't get a rotation around the desired point.
Thanks for answers!
Here is an image of the problem.

I just made a withAnchorPoint function to make working with transforms and anchor points easier in React Native: https://github.com/sueLan/react-native-anchor-point.
You can use it like this:
import { withAnchorPoint } from 'react-native-anchor-point';
getTransform = () => {
    let transform = {
        transform: [{ perspective: 400 }, { rotateX: rotateValue }],
    };
    return withAnchorPoint(transform, { x: 0.5, y: 0 }, { width: CARD_WIDTH, height: CARD_HEIGHT });
};
<Animated.View style={[styles.blockBlue, this.getTransform()]} />
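The snippet above assumes that rotateValue, CARD_WIDTH and CARD_HEIGHT already exist. A minimal sketch of one way they could be defined (the card size and the interpolation range are just placeholders):

import { Animated } from 'react-native';

const CARD_WIDTH = 300;  // placeholder card dimensions
const CARD_HEIGHT = 200;

const rotateAnim = new Animated.Value(0);
// map the animated value 0..1 to a degree string usable by rotateX
const rotateValue = rotateAnim.interpolate({
    inputRange: [0, 1],
    outputRange: ['0deg', '180deg'],
});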
I also wrote an article for it.

So I finally found a way and a formula to calculate the offset. The desired point of rotation obviously gets rotated as well when an angle is applied to the object. So what I had to do was get the position of this point (let's call it pRotated) using a rotation matrix with the positions of both the center of the object (m) and the desired point of rotation (p), like this:
angle = alpha * (Math.PI / 180)
pRotated.x = m.x + (p.x - m.x) * Math.cos(angle) - (p.y - m.y) * Math.sin(angle)
pRotated.y = m.y + (p.x - m.x) * Math.sin(angle) + (p.y - m.y) * Math.cos(angle)
Then it was easy to get the offset using
translateX = pRotated.x - p.x
translateY = pRotated.y - p.y
which can then simply be applied to the object, so that it ends up rotated by alpha around the point p.
I've attached a scribble for further explanation.
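As a rough illustration only (this is not the original poster's code), the formula could be wrapped in a small helper that returns a React Native transform array. Note that transform order matters in React Native, so whether the translation has to come before or after the rotation may need adjusting for your layout:

// m: center of the view, p: desired pivot point, alpha: rotation in degrees
const rotateAround = (m, p, alpha) => {
    const angle = alpha * (Math.PI / 180);
    const pRotatedX = m.x + (p.x - m.x) * Math.cos(angle) - (p.y - m.y) * Math.sin(angle);
    const pRotatedY = m.y + (p.x - m.x) * Math.sin(angle) + (p.y - m.y) * Math.cos(angle);
    return [
        { translateX: pRotatedX - p.x },
        { translateY: pRotatedY - p.y },
        { rotate: `${alpha}deg` },
    ];
};

// usage: <View style={{ transform: rotateAround(center, pivot, 45) }} />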

Related

React Native view scaling

So I'm developing a cross-platform React Native app. Per the design requirements, the app uses a lot of images as buttons, which need to be given an initial height and width so that their aspect ratios are correct. From there I've built components that use these image buttons and then placed those components on the main screen. I can get things to look perfect on one screen by using tops and lefts/rights to position the components according to the design requirements I've been given.
The problem I'm running into is scaling this main screen for different screen sizes. I'm basically scaling x and y via the transform property on the outermost view, like so: transform: [{ scaleX: .8 }, { scaleY: .8 }]. After writing a scaling function that accounts for a base height and the current height, this approach works for the actual size of things, but my positioning is all screwy.
I know I'm going about this wrong and am starting to think that I need to rethink my approach, but I'm stumped on how to get these components positioned correctly on each screen without having to hard-code it.
Is there any way to position a view using tops and lefts/rights, lock that in place, then scale it more like an image?
First of all, use flex as far as you can. Then, when you need extra scaling for inner parts, for example, you can use scale functions. I have been using a scale function based on the screen size and the pixel density, and it has worked almost flawlessly so far.
import { Dimensions } from "react-native";
const { width, height } = Dimensions.get("window");
//Guideline sizes are based on standard ~5" screen mobile device
const guidelineBaseWidth = 350;
const guidelineBaseHeight = 680;
const screenSize = Math.sqrt(width * height) / 100;
const scale = size => (width / guidelineBaseWidth) * size;
const verticalScale = size => (height / guidelineBaseHeight) * size;
const moderateScale = (size, factor = 0.5) =>
    size + (scale(size) - size) * factor;
export { scale, verticalScale, moderateScale, screenSize };
Then you can import these and use them in your components. There are different types of scale; try them and see which works best for your components. Hope that helps.
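A minimal usage sketch, assuming the functions above live in a local scaling.js module (the style values are arbitrary):

import { StyleSheet } from "react-native";
import { scale, verticalScale, moderateScale } from "./scaling";

const styles = StyleSheet.create({
    card: {
        width: scale(120),          // grows and shrinks with the screen width
        height: verticalScale(80),  // grows and shrinks with the screen height
        padding: moderateScale(10), // scales more conservatively (factor 0.5)
    },
});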
I ended up going through each view and converting everything that used hard-coded height and width pixels to setting only the width and then using the aspectRatio property, giving it the ratio of the hard-coded width and height. That, along with implementing a scaling function that gave me a fraction of the largest view (say .9) and then scaling the main view using transform. People aren't kidding when they say this responsive UI stuff is tough.
2022 update -
I resolved this problem in my next app by using flex everywhere, plus a function called rem that I use anywhere a fixed pixel count is needed. With this I can set the width on an image, define an aspect ratio based on the image's original dimensions, and get an image that scales with the screen size; it's been super reliable.
static width = Dimensions.get("window").width;
static height = Dimensions.get("window").height;
static orientation = 'PORTRAIT';
static maxWidth = 428;
static rem = size => {
    let divisor = window.lockedToPortrait || Styles.orientation === 'PORTRAIT' ? Styles.width : Styles.height;
    return Math.floor(size * (divisor / Styles.maxWidth));
};
The maxWidth is a predefined value based on the largest device I could find to simulate, which was probably an iPhone Max.
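For illustration, here is how the rem helper and aspectRatio could be combined on an image, assuming rem lives on a Styles class as above and Image is imported from react-native (the asset and its 400x300 original size are made up):

<Image
    source={require("./logo.png")}
    style={{
        width: Styles.rem(200),  // fixed design width, scaled to the current device
        aspectRatio: 400 / 300,  // the image's original dimensions keep its proportions
    }}
/>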

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow about pinch zooming, but I haven't found any for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at 0,0. After trying different things, I haven't found a way that allows the camera to move around while also allowing pinch zooming toward the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate 0,0 because, as scale gets smaller, the values for the camera edges decrease towards 0. So although the zoom goes towards the center of the coordinate system, the camera movement stays at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
(-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface-view coordinate system (although that isn't the full goal), but the camera movement is off, because the offset isn't adjusted for the different scale levels.
I'm still working on a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the midpoint between
    // the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x*scaleChange);
    float offsetY = -(midPoint.y*scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code works quite a bit better now, but it still isn't completely accurate. I tested the code with panning disabled, and the zooming worked somewhat better. However, when panning is enabled, the zooming doesn't focus in on the zoom point at all.
I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2-pX1;
    float y = pY2-pY1;
    return (float)Math.sqrt((x*x)+(y*y));
}
Camera class methods used above:
public float getXMin() {
    return centerX-((centerX-xMin)/zoomFactor);
}
public float getYMin() {
    return centerY-((centerY-yMin)/zoomFactor);
}
public float getXMax() {
    return centerX+((xMax-centerX)/zoomFactor);
}
public float getYMax() {
    return centerY+((yMax-centerY)/zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
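For intuition on why this keeps the pinch midpoint fixed (my own back-of-the-envelope reasoning, not part of the original answer): with a camera whose visible half-range shrinks by 1/zoomFactor, a scene point w sits at a screen fraction proportional to (w - center) * zoomFactor. Requiring that fraction to stay the same when the zoom changes from z to z' gives
(w - center') * z' = (w - center) * z
center' - center = (w - center) * (z' - z) / z'
which matches the offset used above, -(distanceFromCenterToMidpoint * zoomFactorChange / newZoomFactor), since distanceFromCenterToMidpoint is measured as center - midpoint.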

How can I detect if the touch is over the border of a View from PanResponder?

I'm working on an image cropper. I have somehow figured out how to move the cropper; it's not a perfect solution but it works. Now I want to respond to border touches/moves on a given View.
I'm using this module for the cropping, but I'm still stuck on how to respond to border touches/moves.
The gestureState parameter is sufficient for the task.
x0 and y0 are the top-left coordinates of the responder view; moveX and moveY hold the current coordinates of the touch.
So moveX === x0 means the current touch is at the left edge.
Similarly, moveY === y0 means the current touch is at the top edge.
For handling the right and bottom edges, I suggest you use onLayout on the <View> tag and assign the height and width of the view to some variable or a state variable (take care of performance optimisations).
And then use it in a similar way:
onPanResponderMove(evt, {x0, y0, moveX, moveY}) {
    ...
    if (moveX === x0 || moveX === x0 + this.state._currentWidth) {
        // task for left and right edge response
        ...
    }
    ...
}
To get the view width:
<View {...this._myResponder.panHandlers}
    onLayout={ ({nativeEvent: {layout}}) => this.setState({ _currentWidth: layout.width }) } />
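Putting the pieces together, a minimal sketch of what the responder setup could look like (the state field names follow the snippets above; the exact-equality edge test is kept from the answer, though in practice you may want to allow a few pixels of tolerance):

import { PanResponder } from 'react-native';

this._myResponder = PanResponder.create({
    onStartShouldSetPanResponder: () => true,
    onPanResponderMove: (evt, { x0, y0, moveX, moveY }) => {
        const { _currentWidth, _currentHeight } = this.state;
        const onLeftOrRightEdge = moveX === x0 || moveX === x0 + _currentWidth;
        const onTopOrBottomEdge = moveY === y0 || moveY === y0 + _currentHeight;
        if (onLeftOrRightEdge || onTopOrBottomEdge) {
            // respond to border touches/moves here
        }
    },
});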

THREEJS: Rotating the camera while lookingAt

I have a moving camera inside a camera container which flies around the scene on given paths like an airplane, so it can move to any position x, y, z, positive or negative. The camera container looks at its own future path using a spline curve.
Now I want to rotate the camera using the mouse direction while still keeping the general lookAt position and moving forward with the object. You could say I want to turn my head on my body: while the body moves and keeps the general lookAt direction, I turn my head around up to, let's say, 220 degrees up and down, so I can't look behind my body.
In my code the cameraContainer is responsible for moving along a spline curve and for looking at the moving direction. The camera is added as a child of the cameraContainer and is responsible for the rotation driven by the mouse.
What I can't get working properly is the rotation of the camera. I guess it's a very common problem. Let's say the camera, when moving only on the x-axis, doesn't move straight; it moves in a curve. Especially at different camera positions, the rotation looks very different. I tried using the cameraContainer to avoid this problem, but the problem doesn't seem to be related to the world coordinates.
Here is what I have:
// camera is in a container
cameraContainer = new THREE.Object3D();
cameraContainer.add(camera);
camera.lookAt(0,0,1);
cameraContainer.lookAt(nextPositionOnSplineCurve);
// Here goes the rotation depending on mouse
// Vertical
var mouseVerti = 1; // 1 = top, -1 = bottom
if(window.App4D.mouse.y <= instance.domCenterPos.y) // mouse is bottom?
mouseVerti = -1;
// how far is the mouse away from center: 1 most, 0 near
var yMousePerc = Math.abs(Math.ceil((instance.domCenterPos.y - window.App4D.mouse.y) / (instance.domCenterPos.y - instance.domBoundingBox.bottom) * 100) / 100);
var yAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var yRotateRan = mouseVerti * yAngleDiffSide * yMousePerc * Math.PI / 180;
instance.camera.rotation.x += yRotateRan; // rotation x = vertical
// Horizontal
var mouseHori = 1; // 1 = right, -1 = left
if(window.App4D.mouse.x <= instance.domCenterPos.x) // mouse is left?
mouseHori = -1;
// how far is the mouse away from center: 1 most, 0 near
var xMousePerc = Math.abs(Math.ceil((instance.domCenterPos.x - window.App4D.mouse.x) / (instance.domCenterPos.x - instance.domBoundingBox.right) * 100) / 100);
var xAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var xRotateRan = mouseHori * xAngleDiffSide * xMousePerc * Math.PI / 180;
instance.camera.rotation.y += xRotateRan; // rotation y = horizontal
I would be really thankful if someone could give me a hint!
I got the answer after some more trial and error. The solution is simply to take the initial y rotation into consideration.
When setting up the camera container and the real camera as a child of the container, I had to point the camera at the front face of the camera container object in order to make the camera look in the right direction. That led to an initial rotation of 0, 3.14, 0 (x, y, z). The solution was to add 3.14 to the y rotation every time I assigned (as mentioned by WestLangley) the mouse rotation.
cameraReal.lookAt(new THREE.Vector3(0,0,1));
cameraReal.rotation.y = xRotateRan + 3.14;
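A minimal sketch of how I read that fix, applied to the question's variables (3.14 is just an approximation of Math.PI; the mouse rotation is assigned each frame rather than accumulated with +=):

cameraReal.rotation.x = yRotateRan;            // vertical look from the mouse
cameraReal.rotation.y = xRotateRan + Math.PI;  // horizontal look plus the initial 180-degree offset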

dojox.drawing.Drawing - custom tool to create rectangle with rounded corners

I'm working with dojox.drawing.Drawing to create a simple diagramming tool. I have created a custom tool to draw rounded rectangles by extending dojox.drawing.tools.Rect, as shown below -
dojo.provide("dojox.drawing.tools.custom.RoundedRect");
dojo.require("dojox.drawing.tools.Rect");
dojox.drawing.tools.custom.RoundedRect = dojox.drawing.util.oo.declare(
    dojox.drawing.tools.Rect,
    function(options){
    },
    {
        customType:"roundedrect"
    }
);
dojox.drawing.tools.custom.RoundedRect.setup = {
    name:"dojox.drawing.tools.custom.RoundedRect",
    tooltip:"Rounded Rect",
    iconClass:"iconRounded"
};
dojox.drawing.register(dojox.drawing.tools.custom.RoundedRect.setup, "tool");
I was able to add my tool to the toolbar and use it to draw a rectangle on the canvas. Now I would like the rectangle created by my custom tool to have rounded corners, but I can't figure out how.
I have checked the source of the dojox.drawing.tools.Rect class as well as its parent dojox.drawing.stencil.Rect class, and I can see the actual rectangle being created in dojox.drawing.stencil.Rect as follows -
_create: function(/*String*/shp, /*StencilData*/d, /*Object*/sty){
    // summary:
    //      Creates a dojox.gfx.shape based on passed arguments.
    //      Can be called many times by implementation to create
    //      multiple shapes in one stencil.
    //
    //console.log("render rect", d)
    //console.log("rect sty:", sty)
    this.remove(this[shp]);
    this[shp] = this.container.createRect(d)
        .setStroke(sty)
        .setFill(sty.fill);
    this._setNodeAtts(this[shp]);
}
In dojox.gfx, rounded corners can be added to a rectangle by setting the r property.
With this context, could anybody please answer the following questions?
1. What's the mechanism in dojox.drawing to customize the appearance of the rectangle so that it has rounded corners?
2. In the code snippet above, StencilData is passed to the createRect call. What's the mechanism to customize this data? Can the r property of a rectangle, which governs the rounded corners, be set in this data?
Adding rounded rectangles programmatically is easy. In the tests folder you'll find test_shadows.html, which has a line that adds a rectangle with rounded corners:
myDrawing.addStencil("rect", {data:{x:50, y:175, width:100, height:50, r:10}});
You create a data object with x, y, width, height, and a value for r (otherwise it defaults to 0).
If you wanted to do it by extending Rect, the easiest way would be to set the value in the constructor function (data.r = 10, for example), or you could create a pointsToData function to override Rect's version. Either you would have already set the value of this.data.r, or the default is used:
pointsToData: function(/*Array*/p){
    // summary:
    //      Converts points to data
    p = p || this.points;
    var s = p[0];
    var e = p[2];
    this.data = {
        x: s.x,
        y: s.y,
        width: e.x-s.x,
        height: e.y-s.y,
        r: this.data.r || 10
    };
    return this.data;
},
In that example I give r a default value of 10 instead of 0 as before. This works because every time the stencil goes to draw your rect, it converts canvas x,y points (all stencils remember their points) to data (which gfx uses to draw). In other words, this function will always be called before the rect renders.
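And for completeness, a rough sketch of the constructor-based variant mentioned above (untested; it assumes the stencil's data object is available, or can be created, when the constructor runs):

dojox.drawing.tools.custom.RoundedRect = dojox.drawing.util.oo.declare(
    dojox.drawing.tools.Rect,
    function(options){
        // give every rectangle drawn by this tool rounded corners by default
        this.data = this.data || {};
        this.data.r = 10;
    },
    {
        customType:"roundedrect"
    }
);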