Cocos2d how to get object position from tile map depending on retina or not? - objective-c

Hey guys, I have an object sitting in my tile map as a spawn point reference. The problem is that the -hd version is twice as big as the non -hd version, so doing:
(width = width of character getting spawned)
int spawnX = (width/2) + [tilemap spawnX];
gets the wrong position in HD mode, because the tile map is in pixels but cocos2d works in points.
I.e. I could test whether Retina display is supported, but from what I hear that's a bit dodgy.
How can you do this?

Retina display is supported properly on cocos2d v2.0 rc2.
First, make sure to call [director_ enableRetinaDisplay:YES] in your app launch with all the other cocos2d initialization stuff.
Then, use CC_CONTENT_SCALE_FACTOR() * pointCount to get pixels out of it.
There are also other convenience macros defined in the same header as the CC_CONTENT_SCALE_FACTOR() macro to help you convert CGRects, etc. that are in points to pixels, and vice-versa.

Related

A camera zoom issue with libGDX

Hey guys, I'm working on an Android game with libGDX and I have a zoom problem. It's basically a Ski Safari style 2D game, and I want to implement a zoom in/out effect as the height changes. Is it possible to do this with OrthographicCamera? Or should I change the size of the objects in real time (because I still want to keep the background at a fixed size)?
If Camera.zoom doesn't solve your problem, you can draw your background with the batch unmodified and use
batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0);
to scale only the items you want, by varying those variables. I'm not sure if that's what you're looking for, but hopefully it helps.
Simple example:
Class fields:
Matrix4 testMatrix;
float yourScaleVariableX;
float yourScaleVariableY;
Example render method:
batch.begin();
yourBackground.draw(batch); // drawn with the unmodified projection matrix
batch.end();
testMatrix = batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0);
batch.setProjectionMatrix(testMatrix);
batch.begin();
yourItem.draw(batch); // only the items drawn here are scaled
batch.end();
I think changing the matrix, which resizes everything drawn with it, is more efficient.
I hope this explains it well.
Edit
Something I did not mention when I wrote the answer: you can save a copy of the original matrix before editing it and restore it later, or use batch.setProjectionMatrix(camera.combined); to restore it.
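For example, a minimal sketch of that, reusing the placeholder names from the example above (batch is assumed to be a SpriteBatch and yourItem a Sprite or similar):
Matrix4 originalMatrix = batch.getProjectionMatrix().cpy(); // save a copy before editing it
batch.setProjectionMatrix(originalMatrix.cpy().scale(yourScaleVariableX, yourScaleVariableY, 1f));
batch.begin();
yourItem.draw(batch); // only what is drawn here is scaled
batch.end();
batch.setProjectionMatrix(originalMatrix); // restore; batch.setProjectionMatrix(camera.combined) also works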

libgdx tiledmap flicker with Nearest filtering

I am getting strange artifacts on a tiled map while scrolling, with the camera clamped to the player (who is a Box2D body).
Before running into this issue I used the linear filter for the tiled map, which prevents those artifacts but results in texture bleeding (I loaded the tiled map straight from a .tmx file without padding the tiles).
Now I am using the nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the camera clamped to him) it seems like a lot of pixels are flickering. The flickering gets better or worse depending on the camera's zoom value.
However, when I use the "OrthoCamController" class from the libgdx utilities, which lets you scroll the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume the flickering might be caused by bad camera-position values derived from the Box2D body's position.
One more thing I should add: the game instance runs at 1280x720 while my game camera renders only 800x480. When I change the game camera's render resolution to 1280x720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue or knows how to fix it? :)
I had a similar problem and found it was caused by the camera position having a too-fine fractional value.
I think what happens is some sort of rounding of certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
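A minimal sketch of how that looks per frame, assuming player is the Box2D Body the camera follows and scalePosition is the tuning constant (the tile size worked for me); adapt the getters to however you expose the body's position:
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.physics.box2d.Body;

void clampCameraToPlayer(OrthographicCamera camera, Body player, float scalePosition) {
    // snap the camera to a fixed fractional grid so the tile renderer sees stable values
    camera.position.x = Math.round(player.getPosition().x * scalePosition) / scalePosition;
    camera.position.y = Math.round(player.getPosition().y * scalePosition) / scalePosition;
    camera.update();
}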
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)

Why are the maximum X and Y touch coordinates on the Surface Pro different from the native display resolution?

I have noticed that the Surface Pro and I believe the Sony Vaio Duo 11 are reporting maximum touch coordinates of 1366x768, which is surprising to me since their native display resolution is 1920x1080.
Does anyone know of a way to find out at runtime what the maximum touch coordinates are? I'm running a DirectX app underneath the XAML, so I have to scale the touch coordinates into my own world coordinates and I cannot do this without knowing what the scale factor is.
Here is the code that I'm running that looks at the touch coordinates:
From DirectXPage.xaml
<Grid PointerPressed="OnPointerPressed"></Grid>
From DirectXPage.xaml.cpp
void DirectXPage::OnPointerPressed(Platform::Object^ sender, Windows::UI::Xaml::Input::PointerRoutedEventArgs^ args)
{
auto pointerPoint = args->GetCurrentPoint(nullptr);
// the x value ranges between 0 and 1366
auto x = pointerPoint->Position.X;
// the y value ranges between 0 and 768
auto y = pointerPoint->Position.Y;
}
Also, here is a sample project setup that can demonstrate this issue if run on a Surface Pro:
http://andrewgarrison.com/files/TouchTester.zip
Everything on the XAML side is measured in device-independent pixels. Ideally you should never have to worry about actual physical pixels; let WinRT do its magic in the background.
If for some reason you do need to find your current scale factor, you can use DisplayProperties.ResolutionScale and use it to convert DIPs into screen pixels.
their native display resolution is 1920x1080
That makes the display fit the HD Tablet profile, so everything is automatically scaled by 140%, with the opposite un-scaling of course applied to any reported touch positions. You should never get a position beyond 1371,771. This ensures that any Store app works on any device, regardless of the quality of its display and without the application code having to help, beyond providing bitmaps that still look sharp when the app is rescaled to 140% and 180%. You should therefore not do anything at all. It is unclear what problem you are trying to fix.
An excellent article that describes the automatic scaling feature is here.

GLKit Rendering and iOS Device Orientation (Face Up / Down)

I have an app with some projection matrix set-up code based on Xcode 4.5.2's OpenGL Game template. In the update function I set appropriate z-translation values for baseModelViewMatrix by querying [[UIDevice currentDevice] userInterfaceIdiom] as well as UIDeviceOrientationIsLandscape: and UIDeviceOrientationIsPortrait:. This effectively lets me set the scale of the area rendered on screen on a per-orientation basis for each device. I also call update from willAnimateRotationToInterfaceOrientation:duration: to maintain the correct rendering proportions for each orientation of the device during runtime.
This all works fine, however I've noticed that when the device is oriented either face-up or face-down my scene is not displayed, and I only see what appears to be an empty GLKView. Rotating the device to any orientation perpendicular to the ground plane restores the scene to its expected behavior. I tried checking UIDeviceOrientationIsValidInterfaceOrientation:, which seems like it should handle what I need, but did not see any difference in behavior.
My guess is that GLKit does some automatic updating of the GLKView when a change in orientation is detected, but I didn't find any clear answers on what might be causing this particular behavior. Any thoughts on what's going on? Thanks in advance.
If you are using a function like GLKMatrix4MakeLookAt, you need to make sure your look direction is not parallel with the up direction. In the case of looking straight up or down, you'll need to adjust the camera's "up" vector to another value such as 0,0,-1 or 0,0,1.

Sensor Orientation -> GLRotation doesn't work properly

I want to use the Android orientation sensor data for my GLES camera - giving it the rotation matrix. I found a very good example here:
How to use onSensorChanged sensor data in combination with OpenGL
but it only works with GL 1.0 and I need to adapt it for GLES 2.0. Using my own shaders, everything works; moving the camera manually is fine. But the moment I use the rotation matrix as in the example, it doesn't really work.
I generate the rotation matrix with:
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
My application runs in landscape, so I call this method afterwards (like in the example code):
float[] result = new float[16];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result);
return result;
It worked fine on the phone in his code but not in mine. My screen looks like this:
The rotation matrix seems to be rotated 90° to the right (almost as if I had forgotten to switch my activity to landscape).
I thought I might be using the remap() method the wrong way, but in the example it makes sense, and the camera movement itself works: if I rotate to the left, the view rotates to the left as well, although, since everything is turned, it effectively rotates "up" (relative to the ground, which is not at the bottom but on the right). It just looks like I built a wall instead of a ground, but I'm sure my vertex coordinates are right.
I took a look at the draw method for the GLSurface and I don't see what I might have done wrong here:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
MatrixStack.glLoadMatrix(sensorManager.getRotationMatrix()); // loads the model-view matrix with the rotation matrix from above
GameRenderer.setPerspMatrix(); // sets the perspective matrix uniform for GLES; this shouldn't be the problem
MatrixStack.mvPushMatrix();
drawGround();
MatrixStack.mvPopMatrix();
As I said, when moving my camera manually everything works perfect. So what is wrong with the rotation matrix I get?
Well, okay, this was a very old problem, but now that I've looked at the code again I've found the solution.
With the phone in landscape I had to remap the axes using
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, R);
But that still didn't rotate the image, even though the mapping of the Y and -X axes worked fine. So simply adding
Matrix.rotateM(R, 0, 90, 1, 0, 0);
does the job. Not really nice, but it works.
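For reference, a minimal sketch of the combined steps, assuming bufferedAccelGData and bufferedMagnetData are the latest readings from onSensorChanged() (names taken from the question above):
import android.hardware.SensorManager;
import android.opengl.Matrix;

static float[] buildCameraRotation(float[] bufferedAccelGData, float[] bufferedMagnetData) {
    float[] rotation = new float[16];
    float[] remapped = new float[16];
    SensorManager.getRotationMatrix(rotation, null, bufferedAccelGData, bufferedMagnetData);
    SensorManager.remapCoordinateSystem(rotation, SensorManager.AXIS_Y,
            SensorManager.AXIS_MINUS_X, remapped);
    // the remap alone still left the image turned, so rotate 90° around X as a workaround
    Matrix.rotateM(remapped, 0, 90, 1, 0, 0);
    return remapped; // load this as the model-view (camera) matrix
}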
I know it is a very old question and I don't see why I made this mistake, but perhaps someone else will run into the same problem one day.
Hope this helps,
Tobias
If it is (or was) working on a specific phone but not on yours, I guess the Android version may play a role here. We faced the issue in the mixare Augmented Reality Engine, where a SurfaceView is superimposed over a Camera view. Please consider that the information here may not apply to your case, since we are not using OpenGL.
Modern versions of Android return a default orientation, whereas previously portrait was the default. You can check how we query this in the Compatibility class. This information is then used to pass different values to the remapCoordinateSystem call; check lines 739 and onwards of this file.
Mixare also uses landscape mode by default, so I guess our values for the remapping should apply to your case just as well. As I said earlier, we are using 3x3 matrices, since we are not using OpenGL, but I guess it should be the same for OpenGL-compatible matrices.
Take time to play with the orientation matrix; you will find a column that contains useful values.
Log the values of each column and see which one is useful; try quaternions; keep playing with the values. Never try the code directly in the renderer, check the values first.
Later you will have more input options, like touch, and there too you will have to test the values, play with them, and use sensitivity constants with the matrices.
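If it helps, here is a rough debugging sketch (the method name is just a placeholder) that dumps the 4x4 rotation matrix four values at a time, so you can see which row/column carries the values you expect before wiring the matrix into the renderer:
import android.util.Log;

static void dumpRotationMatrix(float[] m) {
    // print the 16 entries four at a time; whether a line is a row or a column
    // depends on the convention (Android sensor matrices vs. OpenGL column-major)
    for (int i = 0; i < 4; i++) {
        Log.d("Orientation", String.format("%.3f  %.3f  %.3f  %.3f",
                m[4 * i], m[4 * i + 1], m[4 * i + 2], m[4 * i + 3]));
    }
}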