Hey guys, I'm working on an Android game with libGDX, and I have a zoom problem. It's basically a Ski Safari style 2D game, and I want to implement a zoom in/out effect as the height changes. Is it possible to achieve this with an OrthographicCamera? Or should I change the size of the objects in real time (because I still want to keep the background at a fixed size)?
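To make it concrete, here is a rough sketch of what I have in mind, assuming an OrthographicCamera called camera and a playerHeight value that grows with altitude (the names and constants are just placeholders):
float t = MathUtils.clamp(playerHeight / MAX_HEIGHT, 0f, 1f); // com.badlogic.gdx.math.MathUtils; MAX_HEIGHT is a made-up tuning constant
camera.zoom = MathUtils.lerp(MIN_ZOOM, MAX_ZOOM, t); // zoom > 1 shows more of the world
camera.update(); // must be called after changing zoom
batch.setProjectionMatrix(camera.combined);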
If Camera.zoom does not solve your problem, you can draw your background with the batch unmodified, and then use batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0)
to scale only the items that you want, varying those variables; if that is what you are looking for, hopefully this helps.
simple example:
Class variables:
Matrix4 testMatrix;
float yourScaleVariableX;
float yourScaleVariableY;
Example render method:
batch.begin();
yourBackground.draw(...); // background drawn with the unmodified projection matrix
batch.end();
batch.begin();
testMatrix = batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0);
batch.setProjectionMatrix(testMatrix); // everything drawn from here on is scaled
yourItem.draw(...);
batch.end();
I think it is more efficient to change the matrix, which resizes all the objects at once.
I hope I explained it well.
Edit
I did not realize when I wrote the answer: you can save the original matrix before editing it, for later use, or use batch.setProjectionMatrix(camera.combined); to restore it.
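For example, a minimal sketch of that save/restore step (same placeholder names as above):
Matrix4 originalMatrix = batch.getProjectionMatrix().cpy(); // save before scaling (com.badlogic.gdx.math.Matrix4)
batch.setProjectionMatrix(originalMatrix.cpy().scale(yourScaleVariableX, yourScaleVariableY, 0));
// ... draw the scaled items ...
batch.setProjectionMatrix(originalMatrix); // restore; batch.setProjectionMatrix(camera.combined); also works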
Related
For a program that I'm making in VB.NET, I need a rectangle, with an image displayed on it, that rotates and moves around the screen. It needs to move quickly and responsively, so I'm using the standard RectangleShape. The problem is that VB.NET apparently has no built-in function to rotate this rectangle. I'm not really able to use the corresponding Graphics equivalent with FillRectangle, as it's incredibly laggy on the computer I'm using for this, since it requires constant DrawImage calls for separate bitmaps.
So, is there a way to have a Rectangle that can:
Hold an image
Be rotated
Be moved around the stage in a very CPU-light manner
Thank you
Dim mxRotate As New Matrix()
'75 being the arbitrary number I picked to rotate by
mxRotate.Rotate(75, MatrixOrder.Append)
e.Graphics.Transform = mxRotate
e.Graphics.DrawRectangle(YourPen, YourRect)
This can probably help you rotate the image: How to rotate JPEG using Graphics.RotateTransform without clipping
As for performance, I'd imagine all of this will use a little bit of the CPU. Alternatively, you can use DirectX or OpenGL for rendering if that's an option.
I am having strange artifacts on a TiledMap while scrolling with the camera clamped to the player (who is a Box2D body).
Before getting this issue I used the linear filter for the TiledMap, which prevents those strange artifacts but results in texture bleeding (I loaded the TiledMap straight from a .tmx file without padding the tiles).
However, now I am using the nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the camera clamped to him) it seems like a lot of pixels are flickering. The flickering gets better or worse depending on the camera's zoom value.
But when I use the "OrthoCamController" class from the libGDX utilities, which lets me scroll the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume that the flickering might be caused by bad camera position values derived from the Box2D body's position.
One more thing I should add here: the game instance runs at 1280x720 while my game camera renders only 800x480. When I change the game camera's render resolution to 1280x720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue or knows how to fix it? :)
I had a similar problem with this, and found it was due to the camera position having a tiny fractional (sub-pixel) component.
I think what may be happening is some sort of rounding with certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
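A minimal sketch of that snapping applied to both axes, assuming the player position is in world units and tileSize is a placeholder for the size of one tile:
float scalePosition = tileSize; // using the tile size worked for me
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
camera.position.y = Math.round(player.entity.getY() * scalePosition) / scalePosition;
camera.update();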
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)
I want to use the Android orientation sensor data for my GLES camera - giving it the rotation matrix. I found a very good example here:
How to use onSensorChanged sensor data in combination with OpenGL
but it only works with GL 1.0 and I need it to work with GLES 2.0. Using my own shaders, everything works, and moving the camera manually is fine. But the moment I use the rotation matrix like in the example, it doesn't really work.
I generate the rotation matrix with:
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
My application runs in landscape, so I call this method afterwards (like in the example code):
float[] result = new float[16];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result);
return result;
It worked fine on my phone with his code, but not with mine. My screen looks like this:
The rotation matrix seems to be rotated 90° to the right (almost as if I had forgotten to switch my activity to landscape).
I thought I might be using the remap() method in the wrong way, but in the example it makes sense, and the camera movement does work now: if I rotate to the left, the screen rotates to the left as well. However, since everything is turned, it rotates "up" (relative to the ground, which is not at the bottom but on the right). It just looks like I built a wall instead of a ground, but I'm sure my vertex coordinates are right.
I took a look at the draw method for the GLSurface and I don't see what I might have done wrong here:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
MatrixStack.glLoadMatrix(sensorManager.getRotationMatrix()); // Loads the model-view matrix with the rotation matrix mentioned above
GameRenderer.setPerspMatrix(); // Sets the perspective matrix uniform for GLES; this should not be the problem
MatrixStack.mvPushMatrix();
drawGround();
MatrixStack.mvPopMatrix();
As I said, when I move my camera manually everything works perfectly. So what is wrong with the rotation matrix I get?
Well, okay, it was a very old problem, but now that I have taken a look at the code again I have found the solution.
With the phone in landscape I had to remap the axes using
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, R);
But that still didn't rotate the image, even though the mapping of the Y and -X axes worked fine. So simply using
Matrix.rotateM(R, 0, 90, 1, 0, 0);
does the job. It's not pretty, but it works.
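Putting it all together, a rough sketch of the whole matrix setup described above (the sensor buffers are the ones from the question; the extra 90° fix-up is just what worked for me):
float[] rotationMatrix = new float[16];
float[] remapped = new float[16];
// android.hardware.SensorManager and android.opengl.Matrix
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
// Remap for a landscape activity; the docs recommend separate in/out arrays.
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, remapped);
// Extra 90° rotation around X to get the ground where it belongs.
Matrix.rotateM(remapped, 0, 90, 1, 0, 0);
// 'remapped' is what I now load as the model-view matrix for GLES 2.0.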
I know it is a very old question and I don't see why I made this mistake, but perhaps someone else will run into the same problem one day.
Hope this helps,
Tobias
If it is (or was) working on a specific phone but not on yours, I guess the Android version may play a role here. We faced this issue in the mixare Augmented Reality Engine, where a SurfaceView is superimposed over a camera view. Please bear in mind that the information here may not apply to your case, since we are not using OpenGL.
Modern versions of Android return a default orientation, whereas previously portrait was the default. You can check how we query this in the Compatibility class. This information is then used to pass different values to the remapCoordinateSystem call; check lines 739 and onwards of this file.
mixare uses landscape mode by default as well, so I guess our values for the remapping should apply to your case just as well. As I said earlier, we are using the 3x3 matrices since we are not using OpenGL, but I guess this should be the same for OpenGL-compatible matrices.
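This is not mixare's actual code, just a sketch of the general idea of choosing the remap axes from the current display rotation (rotationMatrix and remapped stand for your in/out arrays; the axis pairs below are the usual choices, but verify them on your device):
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
int axisX;
int axisY;
if (rotation == Surface.ROTATION_90) {         // landscape, rotated left
    axisX = SensorManager.AXIS_Y;
    axisY = SensorManager.AXIS_MINUS_X;
} else if (rotation == Surface.ROTATION_270) { // landscape, rotated right
    axisX = SensorManager.AXIS_MINUS_Y;
    axisY = SensorManager.AXIS_X;
} else {                                       // portrait (default)
    axisX = SensorManager.AXIS_X;
    axisY = SensorManager.AXIS_Y;
}
SensorManager.remapCoordinateSystem(rotationMatrix, axisX, axisY, remapped);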
Take time to play with the orientation matrix; you will find a column that contains useful values.
Log the values of each column and see which one is useful, try quaternions, and keep playing with the values. Never put the code straight into the renderer; check the values first.
Later you will have more options for input, such as touch, and there too you will have to test the values, play with them, and use sensitivity constants with the matrices.
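For example, a quick way to log the matrix columns before touching the renderer (a sketch; rotationMatrix is assumed to be the 16-element array filled by getRotationMatrix, which is laid out row by row):
for (int col = 0; col < 4; col++) {
    Log.d("RotationMatrix", String.format("col %d: %.3f %.3f %.3f %.3f",
            col, rotationMatrix[col], rotationMatrix[col + 4], rotationMatrix[col + 8], rotationMatrix[col + 12]));
}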
I have been looking for a solution on the web for a long time. Most tutorials only cover the fairly simple case of adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. For example, if the image is an animal with a transparent background, the shadow shape is also that animal (not a rectangular shadow matching the UIImageView frame).
But this is not enough. What I need is to modify the shadow so it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight is coming from a certain spot.
To demonstrate what I need, I uploaded two images below, captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also pin-shaped, but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and its height is slightly squeezed.
When we tap and hold a pin, the shadow is also animated away from the pin, so I believe such a shadow can be produced programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or knows better terms to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the app, so pre-rendering the shadow with a tool like Photoshop is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible by using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter (for example CIGaussianBlur) and then convert the image to grayscale. And to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.
I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size as well, but I figure I can't stop it and remove it from the layer until I have the size of the circle.
For an example of how the expanding circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer to the iPhone Quartz2D render expanding circle question.
I have tried checking various values on the layers, the animation, etc., but can't seem to find anything. I figure worst case I can attempt a best guess by taking the current time into the animation and using that to work out where it "should" be based on its from and to size states. This seems like overkill for what I would assume is a value that is incrementing somewhere I can just read.
Update:
I have tried several properties on the presentation layer, including the transform, which never seems to change; all the values are always the same regardless of what size the circle is at the time I check.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two key pieces of information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating on, which for me is the only sublayer available.
Second, from this sublayer you cannot just access the transform directly; you have to use valueForKeyPath to get transform.scale.x. I used x because it's a circle, so x and y are the same.
I then use this to calculate the size of the circle at that moment, based on the values used to create the arc.
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is the layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.