How to mirror a sprite in PIXI.js

I need to flip a sprite across its bottom line. I know there's a way to flip it via the scale property, but that would only flip it around its center. Searching the Pixi.js documentation wasn't helpful either.

I have no experience with Pixi, but from what I've read, you have to set the sprite's anchor property to the bottom of the sprite and then scale by -1. Something like this; please note I didn't test it:
mySprite.anchor.y = 1; /* 0 = top, 0.5 = center, 1 = bottom */
mySprite.scale.y *= -1; /* flip vertically */
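A slightly fuller sketch of the same idea (untested; the texture path and positions are made up for illustration):
// Minimal PIXI sketch: mirror a sprite across its bottom edge (untested).
const sprite = PIXI.Sprite.from('bunny.png'); // hypothetical texture
sprite.anchor.set(0.5, 1);  // anchor at bottom-center; y = 1 is the bottom edge
sprite.position.set(200, 300);
sprite.scale.y *= -1;       // mirror vertically around the anchor, i.e. the bottom line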

Related

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow about pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always goes into the center of the OpenGL surface-view coordinate system at (0,0). After trying different things, I haven't yet found a way that allows the camera to move around while also allowing pinch zooming into the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0,
              ((-WIDTH/2f) + mX) * scale, ((WIDTH/2f) + mX) * scale,
              ((-HEIGHT/2f) + mY) * scale, ((HEIGHT/2f) + mY) * scale, 1f, 2f);
In the above code, I realize that the camera moves toward the coordinate (0,0) because, as scale gets increasingly smaller, the values for the camera edges shrink toward 0. So although the zoom pulls toward the coordinate-system center, the camera movement itself works at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0,
              (-WIDTH/2f) * scale + mX, (WIDTH/2f) * scale + mX,
              (-HEIGHT/2f) * scale + mY, (HEIGHT/2f) * scale + mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface-view coordinate system (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
I'm still working to find a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear.
Note: I don't think it matters, but I'm doing this on Android using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x * scaleChange);
    float offsetY = -(midPoint.y * scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code works quite a bit better now, but it still isn't completely accurate. I tested how the code behaved with panning disabled, and the zooming worked somewhat better. However, with panning enabled, the zooming doesn't focus in on the zoom point at all.
I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me, in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance / initialPointerDistance; //> Capture initialPointerDistance when the gesture starts, before calling this
final float newZoomFactor = previousZoomFactor * zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor - previousZoomFactor; //> previousZoomFactor is the zoom value before this update
//> The x and y values below are in scene coordinates (not surface coordinates)
final float distanceFromCenterToMidpointX = camera.getCenterX() - currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY() - currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX * zoomFactorChange / newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY * zoomFactorChange / newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to update this after every zoom step
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2 - pX1;
    float y = pY2 - pY1;
    return (float) Math.sqrt((x * x) + (y * y));
}
Camera class methods used above:
public float getXMin() {
    return centerX - ((centerX - xMin) / zoomFactor);
}
public float getYMin() {
    return centerY - ((centerY - yMin) / zoomFactor);
}
public float getXMax() {
    return centerX + ((xMax - centerX) / zoomFactor);
}
public float getYMax() {
    return centerY + ((yMax - centerY) / zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called as follows:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
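For reference, the same bookkeeping can be condensed into a small framework-agnostic JavaScript sketch (all names here are hypothetical); it pans the camera center so that the scene point under the pinch midpoint keeps its screen position while the zoom changes:
// Sketch of the zoom-to-point logic above; camera = { centerX, centerY, zoom },
// (midX, midY) is the pinch midpoint in scene coordinates.
function zoomAt(camera, midX, midY, newZoom) {
    var zoomChange = newZoom - camera.zoom;
    camera.centerX -= (camera.centerX - midX) * zoomChange / newZoom;
    camera.centerY -= (camera.centerY - midY) * zoomChange / newZoom;
    camera.zoom = newZoom; // the visible half-extent is (size / 2) / zoom
}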

THREEJS: Rotating the camera while lookingAt

I have a moving camera in a camera container which flies around the scene on given paths like an airplane, so it can move to any position (x, y, z), positive or negative. The camera container looks at its own future path using a spline curve.
Now I want to rotate the camera using the mouse direction while still keeping the general look-at position as it moves forward with the object. You could say I want to turn my head on my body: while the body moves and keeps the general look-at direction, I turn my head around, up and down, to, say, 220 degrees, so I can't look behind my body.
In my code the cameraContainer is responsible for moving on a spline curve and for looking at the moving direction. The camera is added as a child to the cameraContainer and is responsible for the rotation driven by the mouse.
What I can't get working properly is the rotation of the camera. I guess it's a very common problem. Let's say the camera, when moving only on the x-axis, doesn't move straight; it moves in a curve. Especially at different camera positions, the rotation seems very different. I was trying to use the cameraContainer to avoid this problem, but the problem seems unrelated to the world coordinates.
Here is what I have:
// camera is in a container
cameraContainer = new THREE.Object3D();
cameraContainer.add(camera);
camera.lookAt(new THREE.Vector3(0, 0, 1));
cameraContainer.lookAt(nextPositionOnSplineCurve);
// Here goes the rotation depending on the mouse
// Vertical
var mouseVerti = 1; // 1 = top, -1 = bottom
if (window.App4D.mouse.y <= instance.domCenterPos.y) // mouse is bottom?
    mouseVerti = -1;
// how far the mouse is from the center: 1 = far, 0 = near
var yMousePerc = Math.abs(Math.ceil((instance.domCenterPos.y - window.App4D.mouse.y) / (instance.domCenterPos.y - instance.domBoundingBox.bottom) * 100) / 100);
var yAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var yRotateRan = mouseVerti * yAngleDiffSide * yMousePerc * Math.PI / 180;
instance.camera.rotation.x += yRotateRan; // rotation x = vertical
// Horizontal
var mouseHori = 1; // 1 = right, -1 = left
if (window.App4D.mouse.x <= instance.domCenterPos.x) // mouse is left?
    mouseHori = -1;
// how far the mouse is from the center: 1 = far, 0 = near
var xMousePerc = Math.abs(Math.ceil((instance.domCenterPos.x - window.App4D.mouse.x) / (instance.domCenterPos.x - instance.domBoundingBox.right) * 100) / 100);
var xAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var xRotateRan = mouseHori * xAngleDiffSide * xMousePerc * Math.PI / 180;
instance.camera.rotation.y += xRotateRan; // rotation y = horizontal
I would be really thankful if someone could give me a hint!
I got the answer after some more trial and error. The solution is to simply take the initial y rotation into consideration.
When setting up the camera container with the real camera as a child of the container, I had to point the camera at the front face of the camera container object in order to have the camera looking in the right direction. That leads to an initial rotation of (0, 3.14, 0) (x, y, z). The solution was to add 3.14 (i.e. Math.PI) to the y rotation every time I assigned the mouse rotation (as mentioned by WestLangley).
cameraReal.lookAt(new THREE.Vector3(0, 0, 1));
cameraReal.rotation.y = xRotateRan + Math.PI; // compensate for the initial y rotation
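Putting it together, the setup plus the fix look roughly like this (a sketch using the question's variables; untested):
cameraContainer = new THREE.Object3D();
cameraContainer.add(cameraReal);
cameraReal.lookAt(new THREE.Vector3(0, 0, 1)); // the child now starts with rotation (0, PI, 0)
cameraContainer.lookAt(nextPositionOnSplineCurve); // the container follows the flight path
// Each time the mouse rotation is applied, compensate for the initial y rotation:
cameraReal.rotation.y = xRotateRan + Math.PI;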

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere of constant radius that surrounds the object.
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep pointing at the object's center of mass.
Does anyone have a clue on how to achieve this?
Thanks for the help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { // >= rather than ==: incrementing by orbitSpeed may never hit the angle exactly
    orbitSpeed = 0;
} else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
}
Of course, your buttons would modify what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as the camera moves along the orbit path, speeding up and slowing down at various camera angles for a smoother ride. This process is called 'tweening', and you can search for 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support, but I have never looked into it; a rough sketch of the idea follows.
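As a sketch of that idea (this is not Three.js's tween API; startAngle and the step size are assumptions), you could ease the angle with a smoothstep curve:
// Hypothetical smoothstep 'tween' of the orbit angle, called once per frame (untested).
var t = 0; // progress from 0 to 1 over the whole move
function stepOrbit() {
    t = Math.min(t + 0.02, 1);        // advance linearly
    var eased = t * t * (3 - 2 * t);  // smoothstep: slow start, slow end
    var angle = startAngle + (desiredAngle - startAngle) * eased;
    camera.position.x = Math.cos(angle) * orbitRange;
    camera.position.y = Math.sin(angle) * orbitRange;
    camera.lookAt(myObject.position);
}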
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.

Particles inside a moving Box2d world are getting drawn on top instead of inside a layer

I'm using LevelHelper to build my level, and I'm adding some particles (dynamically initialized CCParticleSystemQuads) inside the level. All works fine until I move the world (it's a dynamically drawn Box2D world in which I follow the player with the camera). If I move the world, newly added particles, which emit continuously, are drawn at the right position, but in the particle animation afterwards the particles seem to be drawn relative to the global world/screen position. This gives a weird 'trippy' effect that looks totally unrealistic. The particles should be redrawn/refreshed inside the world.
LevelHelperLoader *lh = gameLayer.lh;
LHLayer *layer = [lh layerWithUniqueName:@"MAIN_LAYER"];
NSArray *array = [lh spritesWithTag:WORTEL];
CCParticleSystemQuad *particle;
CGPoint position;
for (LHSprite *sprite in array) {
    particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
    [layer addChild:particle z:0];
    position = sprite.position;
    position.y += sprite.contentSize.height * 0.5f;
    [particle setPosition:position];
    [particle resetSystem];
}
Does anybody know what I might be doing wrong?
Try changing the particle position type:
particle.positionType = kCCPositionTypeFree;
The alternatives are kCCPositionTypeRelative and kCCPositionTypeGrouped. You may have to try each to see which of them best fits your scenario; I'm guessing it's either "free" or "relative".

Why do my raytraced spheres have dark lines when lit with multiple light sources?

I have a simple raytracer that only traces rays back to the first intersection. The scene looks OK with either of two different light sources on its own, but when both lights are in the scene, there are dark shadows where the lit area from one ends, even in the middle of an area lit by the other light source (particularly noticeable on the green ball). The transition from the 'area lit by both light sources' to the 'area lit by just one light source' appears slightly darker than the 'area lit by just one light source'.
The code where I'm adding the lighting effects is:
// trace lights
for (int l = 0; l < primitives.count; l++) {
    Primitive *p = [primitives objectAtIndex:l];
    if (p.light)
    {
        Sphere *lightSource = (Sphere *)p;
        // calculate diffuse shading
        Vector3 *light = [[Vector3 alloc] init];
        light.x = lightSource.centre.x - intersectionPoint.x;
        light.y = lightSource.centre.y - intersectionPoint.y;
        light.z = lightSource.centre.z - intersectionPoint.z;
        [light normalize];
        Vector3 *normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
        if (primitiveThatWasHit.material.diffuse > 0)
        {
            float illumination = DOT(normal, light);
            if (illumination > 0)
            {
                float diff = illumination * primitiveThatWasHit.material.diffuse;
                // add diffuse component to ray colour
                colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
            }
        }
        [normal release];
        [light release];
    }
}
How can I make it look right?
It's a perceptual effect called Mach banding.
You are also very likely viewing the images in the wrong color space. Your ray tracer is doing the lighting math in a "linear" space, but then you are almost certainly viewing those images on a display with a nonlinear response, and therefore not even seeing the correct results. This could easily be making the Mach bands much more prominent than if you were displaying them properly. Try learning about gamma correction.
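As a sketch of that correction (JavaScript here for brevity, since the raytracer itself is Objective-C), each linear-light channel would be encoded to sRGB just before display:
// Encode a linear-light channel value in [0, 1] to sRGB (IEC 61966-2-1 piecewise curve).
function linearToSRGB(c) {
    c = Math.min(Math.max(c, 0), 1); // clamp to [0, 1] first
    return c <= 0.0031308
        ? 12.92 * c
        : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}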
Your eyes are deceiving you. If you put the spheres from the three pictures side by side, you will see very clearly that the areas are the same colour when lit by a single light and brighter when lit by both. If you want to make it look nicer, I suggest you add a whole arc of light sources between the current ones.
You've saturated one colour channel in the image; turn down the brightness a bit and see what happens.
Are you sure your lighting directions are both normalized?
It may be worth throwing an assert in there.