Cocos2D: Updating positions for retina - objective-c

I've been working with a modified version of Cocos2D 0.99.5. Nothing in this modified version has changed as far as positions go, but when I enabled retina support, the tmx maps display fine, while tiles detected with tileGIDAt:, positions computed with ccp(), and sprite positioning are all way off. This is a known situation that I've done some research on, but I don't know the easiest way to overcome it. I'm hoping to edit just a few things in Cocos2D (using points instead of pixels when retina is enabled) to solve this, but I haven't seen anything online that mentions this.
I saw some code that divides an object's position by CC_CONTENT_SCALE_FACTOR():
CGPoint objectPosition = [tmxLayer positionAt:objectTile];
if (CC_CONTENT_SCALE_FACTOR() == 2) {
    objectPosition.x /= CC_CONTENT_SCALE_FACTOR();
    objectPosition.y /= CC_CONTENT_SCALE_FACTOR();
}
After checking out some methods in Cocos2D, I really don't know where to apply this. What exact updates do I need to make, and where do they go?

I ran into the same problem, and here is what I found.
The problem has to do with points versus pixels and how Cocos2D handles them, which you alluded to in your question. As you know, a point on a non-retina display refers to the same location as a point on a retina display. The iPhone 3GS, which is non-retina, has a resolution of 320 x 480, and the center point of that screen is (160, 240). The iPhone 4, which is retina, has a resolution of 640 x 960, but the center point of that screen is still (160, 240).
Let us assume that your tmx map is made up of tiles that are 32 x 32 pixels. Let us further assume that you want to check the tile your "hero" sprite is currently on. Finally, let us assume that your hero sprite's position is (192, 288). To get the tile coordinate you would logically take the position of your sprite and divide both the x and y values by your tile size of 32 (I am leaving out the Y-coordinate flipping). Rather than hard-coding the value 32, I assume you are getting it with something like the following, where tileMap is your already-loaded map:
tileMap.tileSize.width
So based on the (192, 288) position, your hero is at tile (6, 9) within your map. The problem is that on a retina display the (192, 288) position is in points, but your 32 x 32 tile size is in pixels. On a retina display, 32 x 32 pixels is really 16 x 16 in points. So in actuality, your hero sprite is not at tile (6, 9) but rather at tile (12, 18).
As such, an easy way to fix this is to check for a retina display, and if one exists, divide the width and height of your tile by 2 (that is, by CC_CONTENT_SCALE_FACTOR()) to convert it into points before computing tile coordinates.
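For illustration, here is a minimal sketch of that conversion. The tileCoordForPosition: helper is a hypothetical name (it is not part of Cocos2D), and tileMap is assumed to be a CCTMXTiledMap ivar:
- (CGPoint)tileCoordForPosition:(CGPoint)position {
    // tileSize is in pixels; divide by CC_CONTENT_SCALE_FACTOR() (2 on
    // retina, 1 otherwise) to get the tile size in points.
    CGFloat tileWidth  = tileMap.tileSize.width  / CC_CONTENT_SCALE_FACTOR();
    CGFloat tileHeight = tileMap.tileSize.height / CC_CONTENT_SCALE_FACTOR();

    NSInteger x = (NSInteger)(position.x / tileWidth);
    // TMX maps count rows from the top, so flip the y coordinate.
    NSInteger y = (NSInteger)((tileMap.mapSize.height * tileHeight - position.y) / tileHeight);
    return ccp(x, y);
}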
This worked great for me, and I hope it helps you as well.

Related

Y axis depth sorting in SDL2

I've been using SDL for my game, but rendering order has always been a problem. When you draw something in SDL, it gets drawn over whatever was drawn before it. This means that if I draw a character at y = 10 and then draw a character at y = 5, the one at y = 5 appears on top of the one at y = 10, even though it should be the other way around.
To demonstrate the problem: this is what it should look like. The sprite with the lower Y value should appear behind, no matter which one I draw first.
In 3D, things don't need to be rendered in a specific order; the Z buffer determines what is drawn behind what. I want SDL to behave the same way, but along the Y axis. How?
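A common way to get this behaviour in SDL2, which has no depth buffer for 2D rendering, is the painter's algorithm: sort the draw list by Y every frame before issuing draw calls. A minimal C++ sketch, where the Entity struct and renderScene function are illustrative assumptions rather than SDL API:
#include <SDL.h>
#include <algorithm>
#include <vector>

struct Entity {
    SDL_Texture *texture; // what to draw
    SDL_Rect dst;         // where to draw it; the bottom edge is the depth key
};

void renderScene(SDL_Renderer *renderer, std::vector<Entity> &entities) {
    // Painter's algorithm: sort by each sprite's baseline (dst.y + dst.h)
    // so sprites lower on the screen are drawn later, i.e. in front.
    std::sort(entities.begin(), entities.end(),
              [](const Entity &a, const Entity &b) {
                  return a.dst.y + a.dst.h < b.dst.y + b.dst.h;
              });
    for (const Entity &e : entities) {
        SDL_RenderCopy(renderer, e.texture, nullptr, &e.dst);
    }
}
Sorting on the baseline rather than the top edge makes sprites of different heights layer correctly, since a sprite's "feet" decide its depth.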

ViewController (Width and Height) scale

I want to set a screen size for my view (making it for iPhone 6). The problem is, I don't know whether the input values are in points or pixels.
Is it 600 pixels or 600 points?
Thanks.
It is in points. On retina devices, 1 point equals 2 pixels (or 3 pixels on devices that support @3x). On non-retina devices, 1 point equals 1 pixel.
To answer your question, these are points, not pixels.
I am not sure why you want to set a fixed size only for the iPhone, but you might be interested in checking out some Auto Layout tutorials like this one. They will help you build interfaces for multiple devices at a time!
Like KDeogharkar said, the factor between points and pixels differs depending on the device. Usually you don't want to work with pixels.
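If you do need the conversion, a minimal Swift sketch (pre-Swift 3 syntax, to match the other snippets on this page) reads the screen's scale factor:
import UIKit

// scale is 1.0 on non-retina, 2.0 on @2x and 3.0 on @3x devices.
let scale = UIScreen.mainScreen().scale
let widthInPoints: CGFloat = 600
let widthInPixels = widthInPoints * scale // 1200 physical pixels on an @2x device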

Scaling for Different iPhone Screens in Sprite Kit

I am creating an iPhone game in Sprite Kit. After weeks of research, I am still having trouble understanding how to properly size and implement sprites for each screen size.
I understand that these suffixes determine which image to use (depending on the screen):
@2x - 4s, 5, 6
@3x - 6+
I have read about and toyed with the different scaling modes in my view controller, but I had no luck and difficulty understanding them.
If I provide a background of 750 x 1334 pixels as the @2x image, it will perfectly fit the iPhone 6 but will be too big for the iPhone 5. If scaling is the answer, how would Sprite Kit know I provided an image for the iPhone 5 that I want scaled up for the 6, or vice versa? Is this a build setting? The same goes for characters: I need iPhone 6 sprites to be proportionally bigger than the iPhone 5 sprites.
How would I most appropriately size and scale sprites for the different devices? (It is easiest to discuss in terms of backgrounds, which should be exactly the size of the screen.)
I am expecting to create one set of sprites for each aspect ratio using the resolution of the biggest screen size, e.g. the @2x designed for the iPhone 6 and scaled down for the 5 and 4s.
The @3x, @2x and normal images are not really intended to be manipulated that way. The three images should be essentially the same image, with the @3x having exactly three times the pixel dimensions of the normal image, the @2x double the dimensions, etc.
If you need to scale the scene to better fit the format of a particular device, you may need to do that when you create the scene, the way Apple's sample code does:
var viewSize = self.view.bounds.size

// On iPhone/iPod touch we want to see a similar amount of the scene as on iPad.
// So, we set the size of the scene to be double the size of the view, which is
// the whole screen, 3.5- or 4-inch. This effectively scales the scene to 50%.
if UIDevice.currentDevice().userInterfaceIdiom == .Phone {
    viewSize.height *= 2
    viewSize.width *= 2
}
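A related knob worth knowing about is SKScene's scaleMode, which controls how a scene of a fixed size is mapped onto each device's view. A minimal sketch under the assumption that you design one scene at iPhone 6 @2x resolution; GameScene and skView are hypothetical names:
let scene = GameScene(size: CGSize(width: 750, height: 1334))
// .AspectFill keeps the scene's aspect ratio and scales it to fill the
// screen, cropping whatever falls outside the view's bounds.
scene.scaleMode = .AspectFill
skView.presentScene(scene)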

Why are the maximum X and Y touch coordinates on the Surface Pro different from the native display resolution?

I have noticed that the Surface Pro and, I believe, the Sony Vaio Duo 11 report maximum touch coordinates of 1366 x 768, which is surprising to me since their native display resolution is 1920 x 1080.
Does anyone know of a way to find out at runtime what the maximum touch coordinates are? I'm running a DirectX app underneath the XAML, so I have to scale the touch coordinates into my own world coordinates and I cannot do this without knowing what the scale factor is.
Here is the code that I'm running that looks at the touch coordinates:
From DirectXPage.xaml
<Grid PointerPressed="OnPointerPressed"></Grid>
From DirectXPage.xaml.cpp
void DirectXPage::OnPointerPressed(Platform::Object^ sender, Windows::UI::Xaml::Input::PointerRoutedEventArgs^ args)
{
    auto pointerPoint = args->GetCurrentPoint(nullptr);
    // the x value ranges between 0 and 1366
    auto x = pointerPoint->Position.X;
    // the y value ranges between 0 and 768
    auto y = pointerPoint->Position.Y;
}
Also, here is a sample project setup that can demonstrate this issue if run on a Surface Pro:
http://andrewgarrison.com/files/TouchTester.zip
Everything on the XAML side is measured in device-independent pixels (DIPs). Ideally you should never have to worry about actual physical pixels; let WinRT do its magic in the background.
If for some reason you do need to find the current scale factor, you can use DisplayProperties.ResolutionScale and use it to convert DIPs into physical screen pixels.
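For example, a minimal C++/CX sketch, reusing pointerPoint from the question's handler; the enum's numeric value is its scale percentage:
using namespace Windows::Graphics::Display;

// On a Surface Pro this is ResolutionScale::Scale140Percent, i.e. 140.
float scale = static_cast<int>(DisplayProperties::ResolutionScale) / 100.0f;

// Convert a position reported in DIPs into physical pixels.
float physicalX = pointerPoint->Position.X * scale;
float physicalY = pointerPoint->Position.Y * scale;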
their native display resolution is 1920x1080
That makes the display fit the HD Tablet profile, so everything is automatically scaled by 140%, with the opposite un-scaling of course applied to any reported touch positions. You should never get a position beyond 1371 x 771 (1920 / 1.4 and 1080 / 1.4). This ensures that any Store app works on any device, regardless of the quality of its display, without the application code having to help beyond providing bitmaps that still look sharp when the app is rescaled to 140% or 180%. You should therefore not do anything at all. It is unclear what problem you are trying to fix.
An excellent article that describes the automatic scaling feature is here.

Three.js camera rotation is weird

I've just started with Three.js. Like really just now.
After playing with it for an hour or so and building a tool that helps me understand how the different elements work together (Camera, Light, Objects), I found something strange.
The tool: http://hotblocks.nl/tests/three/cubes.html
This is the current default setup:
the Camera is positioned 210 upwards and
500 backwards and
246 to the right
the Camera is rotated slightly to the left
the light is directly above and shines in all directions
As you can see, the objects are at the very bottom of the viewport. So I want to turn the Camera downward, so I can see more of them.
Try that: turn camera.rotation.x down.
That works, but the angle of rotation is wrong! Instead of the Camera rotating, it's the World rotating around its Z axis.
That's not right, is it?
The Y axis is also wrong. It rotates the World around its Y axis.
Rotating the Camera around its Z axis works perfectly: the Camera rotates, not the World.
Am I doing it wrong? Or understanding it wrong?
PS Since the Camera rotation is only around its Y axis, the objects' vertical edges should be vertical in the result as well. In the default setup, they are. Rotating the camera around its X axis shouldn't change that, but it does. Only rotating around its Z axis should change that (and it does). Am I wrong?
PPS I know about Camera.lookAt( THREE.Vector3 target ), but that changes the rotation of the camera, including its Z axis, and that shouldn't be necessary, logically.
Answer received on Github: https://github.com/mrdoob/three.js/issues/1163
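In short, the behaviour comes from the camera's default Euler rotation order (XYZ): the X rotation is applied before Y, so pitching appears to swing the whole world. An order of "YXZ" applies yaw first and gives the first-person behaviour the question expects. A minimal sketch against the modern Three.js API, where the order lives at rotation.order (in 2012-era builds it was the eulerOrder property):
import * as THREE from "three";

const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 1000);

// "YXZ" applies yaw (Y) before pitch (X), so lowering rotation.x tilts
// the camera about its own horizontal axis instead of appearing to spin
// the world, and vertical edges stay vertical.
camera.rotation.order = "YXZ";
camera.rotation.y = -0.18; // slight turn to the left
camera.rotation.x = -0.25; // look down toward the cubes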