Xcode Coordinates for iPad Retina Displays - objective-c

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image, to have it fit the entire screen on the display I need to set its size to 1024x768 and not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying since some of my graphics are not exactly 2x their originals in either width or height. I can't seem to set half-point coordinate values such as 1.5; inside of Interface Builder a value can only be 1 or 2.
Should I just do my interface design in code at this point and forget Interface Builder? Keep my graphics exactly 2x in both directions? Or just live with it?

The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels
In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:
One point does not necessarily correspond to one pixel on the screen.
The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.
In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways. In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
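For example, here is a minimal sketch (iOS 4 and later) of reading that scale factor to decide whether higher-resolution artwork is worth loading; where this runs (e.g. inside a view controller) is up to you:

    // Ask the main screen how many pixels back one point.
    CGFloat scale = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on Retina
    if (scale > 1.0) {
        // High-resolution screen: @2x artwork (or extra detail) will be visible.
    }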

This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging for me to have to provide multiple sets of icons, launch images, etc., to account for hi-res.
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
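If you follow the @2x suffix convention, UIKit picks the right file for you. A quick sketch, assuming both MyGraphic.png and MyGraphic@2x.png are in the bundle (the file names here are hypothetical):

    // On a Retina device, imageNamed: loads MyGraphic@2x.png and sets
    // image.scale to 2.0; image.size is reported in points either way.
    UIImage *image = [UIImage imageNamed:@"MyGraphic"];
    NSLog(@"size in points: %@  scale: %.1f",
          NSStringFromCGSize(image.size), image.scale);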
That doesn't help you when you're dealing with custom graphics inline, however.

Related

Large (in meters) landscape mesh has artifacts on peaks only at certain scale

I made a mesh from a Digital Elevation Map that spanned a 1x1 degree box of geography, but when I scale the mesh up to 11139m in Blender I get these visible jagged shadows on the peaks of the mesh. I'd prefer not to scale everything down, but I suppose I can; it just seems like a strange issue I want to better understand.
My goal is to use the landscape in a WebVR application, but when I put this mesh into an A-Frame scene it also has this issue. Thanks for any tips!
Quick answer:
I think this may be caused by the clipping start/end values (also called the near/far clipping planes). Adjusting them may fix the issue, but it will also limit the rendering distance.
Longer explanation:
Picture a simple grayscale gradient from white to black, and imagine it scaled across your entire scene depth (the Z depth buffer). The range of this buffer is set by the start/stop clipping (near/far) camera setting.
By default, Blender has its start/stop (near/far) clipping set to 0.01 - 1000, while A-Frame has it at about 0.005 - 10000. You may find more information here: A-Frame camera #properties
That means the renderer has to fit every single point in that range somewhere on the grayscale. That may cause overlapping or Z-fighting, because the renderer simply lacks the precision to distinguish the details. It is mainly visible at edges/peaks because the polygons there are connected at acute angles and the program has to round off the Z-values; the resulting overlap is visible as darker shadows (most likely the backside of the polygon behind).
You may also want to read more about Z-fighting because it is somewhat related.
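As a hedged illustration, here is how the near/far properties documented for the A-Frame camera component could be tightened (the values are only an example and should be tuned to your scene):

    <a-scene>
      <!-- Raising "near" spends the depth buffer's precision on the visible
           range instead of the first few millimeters in front of the camera. -->
      <a-entity camera="near: 1; far: 20000" look-controls wasd-controls></a-entity>
    </a-scene>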

Understanding points and the user space in Cocoa Drawing as they interact with screen resolution

Cocoa drawing sizes (widths & heights) are specified in points which are defined as follows in the OS X Cocoa Drawing Guide documentation:
"A single point is equivalent to 1/72 of an inch"
I understand from this that a point is a physical distance. So if my screen is 20 inches wide (for example) I would have 20 x 72 = 1440 points of horizontal width to work with. In other words, a point would be independent of the resolution of the device.
This does not seem to be so...
A simple Cocoa application using window width as a test shows that:
1) when my resolution is set to 1680x1050 it will take a width of 1680 points to span the width of the screen
2) similarly, if I change my resolution to 2560x1440 it will take a window width of 2560 points to span the width of the screen
Also confusing (in a contradictory way) is the statement made in the High Resolution Guidelines Apple Document that:
Each point in user space is backed by four pixels
The above tests seem to indicate that I have a user space of 1680x1050 when my display resolution is set to 1680x1050. If there are 4 pixels per user point, this would imply an effective "real" resolution of twice 1680x1050 in each dimension = 3360x2100, which is more than the native resolution of my 13-inch Retina MacBook Pro (2560x1600).
Points are an abstract, virtual coordinate system. The intent is that you usually design and write drawing code to work in points and that will be roughly consistent to human vision, compensating for different physical display pixel densities and the usual distance between the display and the user's eyes.
Points do not have a reliable relationship to either physical distance units (inches, centimeters, etc.) or physical display pixels.
For screen displays, there are at least three different measurements. For example, the screen of a Retina MacBook Pro has 2880x1800 physical pixels. In the default mode, that's mapped to 1440x900 points, so each point is a 2x2-pixel square. That's why a window on such a system has the same visual size as the same window on a non-Retina MacBook Pro with a screen with 1440x900 physical pixels mapped to 1440x900 points. The window is measured in points and so takes up the same portion of the screen real estate. However, on the Retina display, there are more pixels allowing for finer detail.
However, there is another layer of complexity possible. You can configure that Retina system to display more content on the screen at the cost of some of the detail. You can select a display mode of 1920x1200 points. In that mode, the rendering is done to a backbuffer of 3840x2400 pixels. That allows for rendering at a higher level of detail but keeps the math simple; points are still mapped to 2x2-pixel squares. (This simple math also avoids problems with seams when drawing abutting bitmap images.) But 3840x2400 is greater than the number of physical pixels in the display hardware. So, that backbuffer is scaled down when actually drawn on the screen to the physical 2880x1800 pixels. This loses some of the higher detail from the backbuffer, but the results are still finer-detailed than either a physical 1920x1200 screen or scaling up a 1920x1200 rendering to the physical 2880x1800 screen.
So, for this configuration:
Screen size in points: 1920x1200
Backbuffer in pixels (in memory): 3840x2400
Physical pixels in display hardware: 2880x1800
Other configurations are, of course, possible:
Screen size in points: 2880x1800
Backbuffer in pixels: 2880x1800
Physical pixels: 2880x1800
Everything will be teeny-tiny but you'll be able to fit a lot of stuff (e.g. many lines of text) on the screen.
Screen size in points: 1280x800
Backbuffer in pixels: 2560x1600
Physical pixels: 2880x1800
This will actually make everything (text, buttons, etc.) appear larger since there are fewer points mapped to the same physical pixels. Each point will be physically larger. Note, though, that each point still maps to a 2x2-pixel square in the backbuffer. As before, the backbuffer is scaled by the hardware to the physical display. This time it's scaled up slightly rather than down. (This scaling is the same thing as happens on a non-Retina LCD display when you select a mode with fewer pixels than the physical display. Obviously, an LCD can't change the number of physical pixels it has, so the different resolution is accomplished by scaling a backbuffer.)
Etc.
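On the code side, AppKit (OS X 10.7 and later) exposes this points-to-backing-pixels mapping directly; a minimal sketch of querying it:

    #import <Cocoa/Cocoa.h>

    // How many backing-store pixels does one point map to on this screen?
    NSScreen *screen = [NSScreen mainScreen];
    CGFloat scale = screen.backingScaleFactor;            // 1.0 standard, 2.0 Retina
    NSRect points = NSMakeRect(0, 0, 100, 100);
    NSRect pixels = [screen convertRectToBacking:points]; // 200x200 when scale is 2.0
    NSLog(@"scale %.1f -> backing rect %@", scale, NSStringFromRect(pixels));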

How to size my UI components for a Cocoa mac app given the potential variety of resolutions it could be displayed on?

1) Cocoa uses a drawing system (user coordinate space) measured in "points", which are resolution independent... sounds great.
2) While we need to be concerned with our app running at many resolutions, Cocoa is going to take care of that for us per (1) above... sounds too good to be true!
3) It does scale our controls as resolution changes... this is good.
4) BUT the screen size in points increases as my resolution increases... this is not good; I thought we had a drawing canvas that was independent of the resolution!
5) What if the controls shrink to silly small levels as the resolution increases - should I be concerned about this?
To summarize: is there a "standard" resolution I should design for so that all the automatic scaling by Apple will look fine?
[Confused while reading the Apple Programmer Guide on the topic of Drawing]
You do not need to be concerned about this. The user is only allowed to select resolutions which make sense given the physical size of the display, so the standard controls will always be "large enough". You just need to test your app on Retina and non-Retina displays (and ideally both at the same time, with an external 1x monitor plugged into a 2x machine; move your windows between the two screens and check that your images update accordingly).

iPad frame max size is not enough

I'm developing an iPad application about 2D drawing.
I need a UIView.frame size of 4000x4000, but if I set a frame with size 4000x4000 the application crashes since I get a memory warning.
Right now I'm using a 1600x1000 frame size, and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Have you got any suggestions? How can I tackle this problem?
thanks
Well, I would suggest what has been used in video games for a long time - a tiled LOD mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, and at lower resolution when zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the entire size of the drawing. You just redraw the currently visible view as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
If using bitmap data for drawing (i.e., a Photoshop type of app), then you'll likely need a mechanism that caches off-screen data in secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
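To make the vector-redraw idea concrete anyway, here is a minimal hypothetical sketch; DrawingView, contentOffset, and shapes are invented names, and the view itself is only screen-sized:

    @interface DrawingView : UIView
    @property (nonatomic) CGPoint contentOffset;   // pan position within the 4000x4000 canvas
    @property (nonatomic, strong) NSArray *shapes; // NSValue-wrapped CGRects in canvas coordinates
    @end

    @implementation DrawingView
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        // The part of the canvas currently on screen.
        CGRect visible = CGRectOffset(self.bounds, self.contentOffset.x, self.contentOffset.y);
        for (NSValue *value in self.shapes) {
            CGRect shape = [value CGRectValue];
            if (!CGRectIntersectsRect(shape, visible)) continue; // cull off-screen shapes
            CGContextStrokeRect(ctx, CGRectOffset(shape,
                                                  -self.contentOffset.x,
                                                  -self.contentOffset.y));
        }
    }
    @end

The pan gesture handler would update contentOffset and call setNeedsDisplay, so only the visible region is ever rendered.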
Sounds like you want to be using UIScrollView.
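A hedged sketch of that route, combined with the tiling idea above via CATiledLayer so the 4000x4000 content view never allocates a full-size backing store (TiledCanvasView is a hypothetical name; the setup assumes it runs inside a view controller):

    #import <QuartzCore/QuartzCore.h>

    @interface TiledCanvasView : UIView
    @end

    @implementation TiledCanvasView
    + (Class)layerClass { return [CATiledLayer class]; }
    - (void)drawRect:(CGRect)rect {
        // Called once per visible tile; draw only the content intersecting rect.
    }
    @end

    // Setup, e.g. in viewDidLoad:
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    TiledCanvasView *canvas = [[TiledCanvasView alloc]
        initWithFrame:CGRectMake(0, 0, 4000, 4000)];
    scrollView.contentSize = canvas.frame.size;
    [scrollView addSubview:canvas];
    [self.view addSubview:scrollView];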

How to warp a UIImage using OpenGL or any other method...?

I am trying to develop an iOS app to make any given image (UIImage) warp at selected locations.
So for this task to be accomplished, what would be the right way forward? For now I'm doing some research on doing this with OpenGL (frankly, any heads-up on the framework would be nice too).
So finally, the requirement is to warp the UIImage at some given locations (given x, y coordinates).
If you're sufficiently familiar with (or willing to learn) OpenGL, then you could do this:
Create a flat, rectangular grid of points to be a mesh that will be displayed with OpenGL.
Apply the image to the mesh as a texture.
When distorting the image at a particular location, you can just decide which points on the mesh will be affected by the distortion, and move them.
You can push points out from the center, or in toward a center, or shift them all in the same direction. If the distortion affects a large area, then you change a lot of points (possibly changing those in the center by more than those near the edges of the affected area).
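A hypothetical sketch of that last step in plain C (warpMesh and its parameters are invented for illustration; verts is a flat array of x,y vertex positions whose texture coordinates stay fixed, so moving vertices stretches the image):

    #include <math.h>
    #import <CoreGraphics/CoreGraphics.h>
    #import <OpenGLES/ES2/gl.h>

    // Push vertices away from `center`, fading the effect out toward `radius`.
    static void warpMesh(GLfloat *verts, int vertexCount,
                         CGPoint center, float radius, float strength) {
        for (int i = 0; i < vertexCount; i++) {
            float dx = verts[2 * i]     - center.x;
            float dy = verts[2 * i + 1] - center.y;
            float d  = sqrtf(dx * dx + dy * dy);
            if (d <= 0.0f || d >= radius) continue;  // outside the affected area
            float falloff = 1.0f - d / radius;       // 1 at the center, 0 at the rim
            verts[2 * i]     += (dx / d) * strength * falloff;
            verts[2 * i + 1] += (dy / d) * strength * falloff;
        }
    }

A negative strength pulls points in toward the center instead of pushing them out.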
Not sure what you mean by 'warp'. Do you mean skew it in three dimensions? If so, you can adjust the transform of the UIImageView you are displaying it in (a CGAffineTransform for a 2D skew, or the layer's CATransform3D for a true 3D effect).
If you mean some kind of image-processing warp, and you are using iOS 5, you can use Core Image for that.