How to use CGAffineTransformConcat properly to combine scale and rotate - Objective-C

I am creating views, then rotating and flipping them. Using CGAffineTransformConcat to combine the scales and rotations seems to give the result I expect, but I am running into some trouble.
Initially I set up a transform that flips the view vertically.
CGAffineTransform flipViewTrans;
flipViewTrans = CGAffineTransformMakeScale(1, -1);
selectedView.transform = CGAffineTransformConcat(selectedView.transform, flipViewTrans);
When I rotate the view I concat the original flip transform, and this seems to preserve the flip exactly as expected through the rotation.
-(IBAction)rotateView:(UISlider*)sender {
float angle = sender.value;
selectedView.rotation = angle;
[selectedView setTransform:CGAffineTransformMakeRotation(angle)];
selectedView.transform = CGAffineTransformConcat(selectedView.transform, flipViewTrans);
}
The view flips fine too.
-(IBAction)flipSickerH:(GradientButton*)sender {
flipViewTrans = CGAffineTransformMakeScale(-1, 1);
selectedView.transform = CGAffineTransformConcat(selectedView.transform, flipViewTrans);
}
The problem is that when I rotate after doing the flip as above, the results do not come out as expected: the view suddenly flips in the opposite direction.
What I think is happening is that the concatenated transforms are being reset, so the only transform left is the rotation; perhaps the concatenation in the rotate code is failing in this case.
Any clues as to what I need to do?
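For what it's worth, here is a minimal sketch of one way to keep the flips from being lost (this is not the original poster's code; accumulatedFlip and currentAngle are hypothetical ivars added purely for illustration). The idea: rotateView rebuilds the transform from scratch and re-applies only whatever single flip is stored in flipViewTrans, so accumulating every flip into one stored transform and composing the rotation with it on each change preserves all of them:
// Hypothetical ivars: CGAffineTransform accumulatedFlip (initialised to the vertical flip); float currentAngle;
-(IBAction)flipSickerH:(GradientButton*)sender {
    accumulatedFlip = CGAffineTransformConcat(accumulatedFlip, CGAffineTransformMakeScale(-1, 1));
    [self applyTransform];
}
-(IBAction)rotateView:(UISlider*)sender {
    currentAngle = sender.value;
    [self applyTransform];
}
-(void)applyTransform {
    // Rotation first, then the accumulated flips, matching the concat order used in the question
    selectedView.transform = CGAffineTransformConcat(CGAffineTransformMakeRotation(currentAngle), accumulatedFlip);
}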

Related

Tile Collision In GML

I am making a game based on the game AZ on the website Y8, and I am having problems with tile collisions.
The player basically moves by being given speed when up is pressed, then rotating left or right.
direction = image_angle;
if(keyForward)
{
speed = 2;
}
else speed = 0;
// rotate
if(keyRotateLeft)
{
image_angle = image_angle + 5;
}
if(keyRotateRight)
{
image_angle = image_angle - 5;
}
Then, when the player collides with a tile, I set speed = 0, but the player gets stuck and can't move anymore. Is there a better way to do this?
A simple approach would be as follows (see the sketch after this list):
Attempt to rotate
Check if you are now stuck in a wall
If you are, undo the rotation.
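A minimal GML sketch of that idea, assuming the solid tiles are instances of an object called obj_wall (adjust the object name and the collision check to your setup):
// In the step event, after handling movement input
var previous_angle = image_angle;
if (keyRotateLeft) image_angle += 5;
if (keyRotateRight) image_angle -= 5;
// If the rotated mask now overlaps a wall, undo the rotation
if (place_meeting(x, y, obj_wall)) {
    image_angle = previous_angle;
}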
A more advanced approach would be to attempt pushing the player out of solids while rotating.
Alternatively, you may be able to get away with giving the player a circular mask and not rotating the actual mask (using a user-defined variable instead of image_angle).

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow about pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and pinch zooming works, but the zoom always goes toward the center of the OpenGL surface-view coordinate system at 0,0. After trying different things, I haven't yet found a way that allows the camera to move around while also pinch zooming toward the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate 0,0 because, as scale gets smaller, the values of the camera edges all shrink towards 0. So although the zoom pulls toward the center of the coordinate system, the camera does pan at the correct speed at any scale level.
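To see why, with some made-up numbers (not from the original post): with WIDTH = 1000 and mX = 200, the left edge is (-500 + 200) * scale = -300 * scale and the right edge is (500 + 200) * scale = 700 * scale. At scale 1.0 the edges are -300 and 700; at scale 0.5 they are -150 and 350. Both edges (and the view center at 200 * scale) collapse toward 0 as scale decreases, which is why the zoom always heads for the origin no matter where the camera has been panned.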
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
(-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface-view coordinate system (although that isn't the full goal), but the camera movement is off, because the offset isn't adjusted for the different scale levels.
I'm still working on a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoompoint remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
// midPoint is a PointF object that stores the coordinate of the midpoint
// between the two fingers
float scaleChange = scale - prevScale; // scale is the same as in my previous code
float offsetX = -(midPoint.x*scaleChange);
float offsetY = -(midPoint.y*scaleChange);
cameraPos.x += offsetX;
cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code does work quite a bit better now, but it still isn't completely accurate. With panning disabled, the zooming behaves somewhat better; with panning enabled, however, the zooming doesn't focus on the zoompoint at all.
I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
float x = pX2-pX1;
float y = pY2-pY1;
return (float)Math.sqrt((x*x)+(y*y));
}
Camera class methods used above:
public float getXMin() {
return centerX-((centerX-xMin)/zoomFactor);
}
public float getYMin() {
return centerY-((centerY-yMin)/zoomFactor);
}
public float getXMax() {
return centerX+((xMax-centerX)/zoomFactor);
}
public float getYMax() {
return centerY+((yMax-centerY)/zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
xMin += pX;
yMin += pY;
xMax += pX;
yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
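For context, here is a rough skeleton of how the Camera fields used above might be declared; this part is my own assumption and was not shown in the original answer (only the methods listed above were):
public class Camera {
    private float xMin, yMin, xMax, yMax; // world-space view bounds at zoomFactor == 1
    private float centerX, centerY;       // center of those bounds
    private float zoomFactor = 1f;

    public Camera(float xMin, float yMin, float xMax, float yMax) {
        this.xMin = xMin; this.yMin = yMin;
        this.xMax = xMax; this.yMax = yMax;
        this.centerX = (xMin + xMax) / 2f;
        this.centerY = (yMin + yMax) / 2f;
    }

    public float getCenterX() { return centerX; }
    public float getCenterY() { return centerY; }

    // getXMin()/getYMin()/getXMax()/getYMax(), setZoomFactor() and translate()
    // are the methods shown above; if the center is stored as a field like
    // this, translate() should also shift centerX and centerY by pX and pY.
}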

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera that looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep looking at the center of mass of the object.
Does anyone have a clue how to achieve this?
Thanks for the help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // >= rather than ==: the floating-point steps may never equal desiredAngle exactly
else {
cameraAngle += orbitSpeed;
camera.position.x = Math.cos(cameraAngle) * orbitRange;
camera.position.y = Math.sin(cameraAngle) * orbitRange;
}
Of course, your buttons would set desiredAngle (to 0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
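As a rough illustration of the button side (the element ids here are made up; wire this up to whatever UI you actually have):
document.getElementById('frontViewButton').addEventListener('click', function () {
    desiredAngle = 0;                  // front view
    orbitSpeed = 2 * Math.PI / 180;    // restart the orbit animation
});
document.getElementById('sideViewButton').addEventListener('click', function () {
    desiredAngle = 90 * Math.PI / 180; // side view
    orbitSpeed = 2 * Math.PI / 180;
});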
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
Oh, also remember to set your camera's far property to be greater than orbitRange (plus the size of your object), or you will only see the front half of your object and, depending on what it is, that might look weird.

Particles inside a moving Box2d world are getting drawn on top instead of inside a layer

I'm using LevelHelper to build my level, and I'm adding some particles (dynamically initialized CCParticleSystemQuad instances) inside it. Everything works fine until I move the world (it's a dynamically drawn Box2D world in which I follow the player with the camera). When I move the world, newly added particles, which emit continuously, are drawn at the right position, but in the particle animation afterwards the particles seem to be drawn relative to the global world/screen position. This gives a weird 'trippy' effect that looks totally unrealistic; the particles should be redrawn/refreshed inside the world.
LevelHelperLoader * lh = gameLayer.lh;
LHLayer * layer = [lh layerWithUniqueName:@"MAIN_LAYER"];
NSArray * array = [lh spritesWithTag:WORTEL];
CCParticleSystemQuad * particle;
CGPoint position;
for (LHSprite * sprite in array) {
particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
[layer addChild:particle z:0];
position = sprite.position;
position.y += sprite.contentSize.height * 0.5f;
[particle setPosition:position];
[particle resetSystem];
}
Does anybody know what I might be doing wrong?
Try changing the particle position type:
particle.positionType = kCCPositionTypeFree;
The alternatives are kCCPositionTypeRelative and kCCPositionTypeGrouped. You may have to try all of them to see which best fits your scenario; I'm guessing it's either "free" or "relative".
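For example (just a sketch of where that one-line change would go in the loop from the question):
particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
particle.positionType = kCCPositionTypeFree; // or kCCPositionTypeRelative / kCCPositionTypeGrouped
[layer addChild:particle z:0];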

Why do my raytraced spheres have dark lines when lit with multiple light sources?

I have a simple raytracer that only traces rays back to the first intersection. The scene looks OK with either of two light sources on its own, but when both lights are in the scene there are dark bands where the lit area from one light ends, even in the middle of an area lit by the other light source (this is particularly noticeable on the green ball). The transition from the area lit by both light sources to the area lit by just one seems slightly darker than the area lit by just one light source.
The code where I'm adding the lighting effects is:
// trace lights
for ( int l=0; l<primitives.count; l++) {
Primitive* p = [primitives objectAtIndex:l];
if (p.light)
{
Sphere * lightSource = (Sphere *)p;
// calculate diffuse shading
Vector3 *light = [[Vector3 alloc] init];
light.x = lightSource.centre.x - intersectionPoint.x;
light.y = lightSource.centre.y - intersectionPoint.y;
light.z = lightSource.centre.z - intersectionPoint.z;
[light normalize];
Vector3 * normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
if (primitiveThatWasHit.material.diffuse > 0)
{
float illumination = DOT(normal, light);
if (illumination > 0)
{
float diff = illumination * primitiveThatWasHit.material.diffuse;
// add diffuse component to ray color
colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
}
}
[normal release];
[light release];
}
}
How can I make it look right?
It's a perceptual effect called Mach banding.
You are also very likely viewing the images in the wrong color space. Your ray tracer is doing the lighting math in a "linear" space, but then you are almost certainly viewing those images on a display with a nonlinear response, and therefore not even seeing the correct results. This could easily be making the Mach bands much more prominent than if you were displaying them properly. Try learning about gamma correction.
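For illustration only, a minimal sketch of the idea, assuming the colour components are linear floats in the 0..1 range and are converted to 8-bit values just before display (a power of 1/2.2 is used here as a rough stand-in for a full sRGB conversion):
float gamma = 1.0f / 2.2f;
unsigned char r = (unsigned char)(255.0f * powf(fminf(colour.red, 1.0f), gamma));
unsigned char g = (unsigned char)(255.0f * powf(fminf(colour.green, 1.0f), gamma));
unsigned char b = (unsigned char)(255.0f * powf(fminf(colour.blue, 1.0f), gamma));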
Your eyes are deceiving you. If you put the spheres from the three pictures next to each other, you will see very clearly that the areas are the same colour when lit by a single light and brighter when doubly lit. If you want to make it look nicer, I suggest you add a whole arc of light sources between the current ones.
You've saturated one colour channel in the image; turn down the brightness a bit and see what happens.
Are you sure your lighting directions are both normalized?
It may be worth throwing an assert in there.
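Something along these lines, for example (this assumes the Vector3 class exposes x, y and z as in the question; the tolerance is arbitrary):
[light normalize];
float lengthSquared = light.x*light.x + light.y*light.y + light.z*light.z;
NSAssert(fabsf(lengthSquared - 1.0f) < 1e-4f, @"light direction is not unit length");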