Understanding points and the user space in Cocoa Drawing as they interact with screen resolution - objective-c

Cocoa drawing sizes (widths & heights) are specified in points which are defined as follows in the OS X Cocoa Drawing Guide documentation:
"A single point is equivalent to 1/72 of an inch"
I understand from this that a point is a physical distance. So if my screen is 20 inches wide (for example), I would have 20 x 72 = 1440 points of horizontal width to work with. In other words, a point is independent of the resolution of the device.
This does not seem to be so...
A simple Cocoa application using window width as a test shows that:
1) when my resolution is set to 1680x1050 it will take a width of 1680 points to span the width of the screen
2) similarly, if I change my resolution to 2560x1440 it will take a window width of 2560 points to span the width of the screen
Also confusing (in a contradictory way) is the statement made in Apple's High Resolution Guidelines document that:
Each point in user space is backed by four pixels
The above tests seem to indicate that I have a user space of 1680x1050 when my display resolution is set to 1680x1050. If there are 4 pixels per user-space point, this would imply an effective "real" resolution of 2 x (1680x1050) = 3360x2100, which is more than the 2560x1600 native resolution of my 13-inch Retina MacBook Pro.

Points are an abstract, virtual coordinate system. The intent is that you usually design and write drawing code to work in points and that will be roughly consistent to human vision, compensating for different physical display pixel densities and the usual distance between the display and the user's eyes.
Points do not have a reliable relationship to either physical distance units (inches, centimeters, etc.) or physical display pixels.
For screen displays, there are at least three different measurements. For example, the screen of a Retina MacBook Pro has 2880x1800 physical pixels. In the default mode, that's mapped to 1440x900 points, so each point is a 2x2-pixel square. That's why a window on such a system has the same visual size as the same window on a non-Retina MacBook Pro with a screen with 1440x900 physical pixels mapped to 1440x900 points. The window is measured in points and so takes up the same portion of the screen real estate. However, on the Retina display, there are more pixels allowing for finer detail.
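To see these numbers from code, here is a minimal Objective-C sketch (assuming macOS 10.7+, where NSScreen exposes backingScaleFactor and convertRectToBacking:):

#import <AppKit/AppKit.h>

NSScreen *screen = [NSScreen mainScreen];
NSRect pointRect = screen.frame;                            // screen size in points
NSRect pixelRect = [screen convertRectToBacking:pointRect]; // backbuffer size in pixels
NSLog(@"%.0fx%.0f points, %.0fx%.0f backing pixels, scale %.1f",
      pointRect.size.width, pointRect.size.height,
      pixelRect.size.width, pixelRect.size.height,
      screen.backingScaleFactor);

On a default-mode Retina MacBook Pro this would log 1440x900 points, 2880x1800 backing pixels, scale 2.0.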
However, there is another layer of complexity possible. You can configure that Retina system to display more content on the screen at the cost of some of the detail. You can select a display mode of 1920x1200 points. In that mode, the rendering is done to a backbuffer of 3840x2400 pixels. That allows for rendering at a higher level of detail but keeps the math simple; points are still mapped to 2x2-pixel squares. (This simple math also avoids problems with seams when drawing abutting bitmap images.) But 3840x2400 is greater than the number of physical pixels in the display hardware. So, that backbuffer is scaled down when actually drawn on the screen to the physical 2880x1800 pixels. This loses some of the higher detail from the backbuffer, but the results are still finer-detailed than either a physical 1920x1200 screen or scaling up a 1920x1200 rendering to the physical 2880x1800 screen.
So, for this configuration:
Screen size in points: 1920x1200
Backbuffer in pixels (in memory): 3840x2400
Physical pixels in display hardware: 2880x1800
Other configurations are, of course, possible:
Screen size in points: 2880x1800
Backbuffer in pixels: 2880x1800
Physical pixels: 2880x1800
Everything will be teeny-tiny but you'll be able to fit a lot of stuff (e.g. many lines of text) on the screen.
Screen size in points: 1280x800
Backbuffer in pixels: 2560x1600
Physical pixels: 2880x1800
This will actually make everything (text, buttons, etc.) appear larger since there are fewer points mapped to the same physical pixels. Each point will be physically larger. Note, though, that each point still maps to a 2x2-pixel square in the backbuffer. As before, the backbuffer is scaled by the hardware to the physical display. This time it's scaled up slightly rather than down. (This scaling is the same thing as happens on a non-Retina LCD display when you select a mode with fewer pixels than the physical display. Obviously, an LCD can't change the number of physical pixels it has, so the different resolution is accomplished by scaling a backbuffer.)
Etc.
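If you want to enumerate these configurations programmatically, a hedged sketch (macOS 10.8+; CGDisplayModeGetWidth reports points for scaled modes, CGDisplayModeGetPixelWidth the pixels backing them; whether scaled modes appear in the list depends on the options passed):

#include <stdio.h>
#import <CoreGraphics/CoreGraphics.h>

CFArrayRef modes = CGDisplayCopyAllDisplayModes(CGMainDisplayID(), NULL);
for (CFIndex i = 0; i < CFArrayGetCount(modes); i++) {
    CGDisplayModeRef mode = (CGDisplayModeRef)CFArrayGetValueAtIndex(modes, i);
    // Point size vs. the in-memory pixel size that backs it.
    printf("%zux%zu points backed by %zux%zu pixels\n",
           CGDisplayModeGetWidth(mode), CGDisplayModeGetHeight(mode),
           CGDisplayModeGetPixelWidth(mode), CGDisplayModeGetPixelHeight(mode));
}
CFRelease(modes);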

Related

ViewController (Width and Height) scale

I want to set a screen size for my view (making it for iPhone 6). The problem is, I don't know whether the input is in points or pixels.
Is it 600 pixels or 600 points?
Thanks
It is in points. On Retina devices, 1 point equals 2 pixels (or 3 pixels on devices that support @3x). On non-Retina devices, 1 point equals 1 pixel.
To answer your question, these are points, not pixels.
I am not sure why you want to set a fixed size only for iPhone, but I think you might be interested in checking out some Auto Layout tutorials like this one. It will help you build interfaces for multiple devices at a time!
Like KDeogharkar said, the factor between points and pixels differs depending on the device. Usually you don't want to work with pixels.
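If you need the factor at runtime, a small hedged Objective-C sketch (scale is iOS 4+, nativeScale iOS 8+):

UIScreen *screen = [UIScreen mainScreen];
CGFloat scale = screen.scale;       // 1.0, 2.0, or 3.0 (points to rendered pixels)
CGSize points = screen.bounds.size; // screen size in points
NSLog(@"%.0fx%.0f points at @%.0fx = %.0fx%.0f pixels (native scale %.2f)",
      points.width, points.height, scale,
      points.width * scale, points.height * scale, screen.nativeScale);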

How to design Metro UIs with fonts that look good on any resolution?

When one looks at the Guidelines for fonts, we see that fonts are specified in points. A point is 1/72 of an inch, so it is an absolute measure: a 10-point character should show at the exact same absolute size on any monitor at any resolution. That would make sense to me, as I want to be able to read text at the same size whether on a 10-inch tablet or a 23-inch monitor. In other words, I want my text to be readable on a tablet, but I do not want it to be too big on a monitor.
On the other hand, I can understand that some UI elements could be specified in pixels, as in the Page layout guidelines.
However, in XAML, font size is specified in pixels, which is device dependent (to my understanding). Hence the font size will look tiny on a monitor with a higher resolution! See this post for more details. The answer in that post says "this way, you are getting a consistent font size". I can't see how I am getting a consistent size when it changes when the resolution changes?!?
Should I load different font sizes programmatically depending on the resolution of the device?
I see here that Windows does some scaling adjustment depending on the DPI. Will this adjustment be enough for my users to have a great experience on a tablet and on, say, a 20-inch monitor (or should I programmatically change the font size depending on the device resolution)?
Bonus question: WHY are the Font Guidelines written using points when the software tools do not use points (like, what were they thinking)?
"What were they thinking" is extensively covered in this blog post.
You'll also see described how scaling for pixel density is automatic:
For those who buy these higher pixel-density screens, we want to ensure that their apps, text, and images will look both beautiful and usable on these devices. Early on, we explored continuous scaling to the pixel density, which would maintain the size of an object in inches, but we found that most apps use bitmap images, which could look blurry when scaled up or down to an unpredictable size. Instead, Windows 8 uses predictable scale percentages to ensure that Windows will look great on these devices. There are three scale percentages in Windows 8:
100% when no scaling is applied
140% for HD tablets
180% for quad-XGA tablets
The percentages are optimized for real devices in the ecosystem. 140% and 180% may seem like odd scale percentage choices, but they make sense when you think about how this will work on real hardware.
For example, the 140% scale is optimized for 1920x1080 HD tablets, which has 140% of the baseline tablet resolution of 1366x768. These optimized scale factors maintain consistent layouts between the baseline tablet and the HD tablet, because the effective resolution is the same on the two devices. Each scale percentage was chosen to ensure that a layout that was designed for 100% 1366x768 tablets has content that is the same physical size and the same layout on 140% HD tablets or 180% quad-XGA tablets.
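A quick back-of-the-envelope check of those numbers, as a plain C sketch (the helper name is mine):

// Effective (layout) size = physical pixels scaled back by the percentage.
static int effectiveExtent(int physicalPixels, int scalePercent) {
    return physicalPixels * 100 / scalePercent;
}
// effectiveExtent(1920, 140) == 1371 and effectiveExtent(1080, 140) == 771,
// i.e. roughly the 1366x768 baseline, which is why the same layout fits.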

Why are the maximum X and Y touch coordinates on the Surface Pro different from the native display resolution?

I have noticed that the Surface Pro and I believe the Sony Vaio Duo 11 are reporting maximum touch coordinates of 1366x768, which is surprising to me since their native display resolution is 1920x1080.
Does anyone know of a way to find out at runtime what the maximum touch coordinates are? I'm running a DirectX app underneath the XAML, so I have to scale the touch coordinates into my own world coordinates and I cannot do this without knowing what the scale factor is.
Here is the code that I'm running that looks at the touch coordinates:
From DirectXPage.xaml
<Grid PointerPressed="OnPointerPressed"></Grid>
From DirectXPage.xaml.cpp
void DirectXPage::OnPointerPressed(Platform::Object^ sender, Windows::UI::Xaml::Input::PointerRoutedEventArgs^ args)
{
    auto pointerPoint = args->GetCurrentPoint(nullptr);
    // the x value ranges between 0 and 1366
    auto x = pointerPoint->Position.X;
    // the y value ranges between 0 and 768
    auto y = pointerPoint->Position.Y;
}
Also, here is a sample project setup that can demonstrate this issue if run on a Surface Pro:
http://andrewgarrison.com/files/TouchTester.zip
Everything on the XAML side is measured in device-independent pixels (DIPs). Ideally you should never have to worry about actual physical pixels; let WinRT do its magic in the background.
If for some reason you do need to find your current scale factor, you can use DisplayProperties.ResolutionScale and use it to convert DIPs into screen pixels.
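The conversion itself is just a multiply; a minimal C sketch (the helper name is mine; the percentage would come from DisplayProperties.ResolutionScale, i.e. 100, 140, or 180):

// Convert XAML device-independent pixels to physical screen pixels.
static float dipsToPixels(float dips, int resolutionScalePercent) {
    return dips * resolutionScalePercent / 100.0f;
}
// On a 140% Surface Pro: dipsToPixels(1366.0f, 140) == 1912.4f, i.e. roughly
// the 1920 physical pixels across the panel.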
their native display resolution is 1920x1080
That makes the display fit the HD tablet profile, so everything is automatically scaled by 140%, with the opposite un-scaling of course applied to any reported touch positions. You should never get a position beyond 1371,771. This ensures that any Store app works on any device, regardless of the quality of its display and without the application code having to help, beyond providing bitmaps that still look sharp when the app is rescaled to 140% or 180%. You should therefore not do anything at all. It is unclear what problem you are trying to fix.
An excellent article that describes the automatic scaling feature is here.

XCode Coordinates for iPad Retina Displays

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image, to have it fit the entire screen I need to set its size to 1024x768, not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying, since some of my graphics are not exactly 2x their originals in either width or height. I can't seem to set half coordinate numbers such as 1.5; it can be either 1 or 2 inside of Interface Builder.
Should I just do my interface design in code at this point and forget Interface Builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

One point does not necessarily correspond to one pixel on the screen.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways. In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
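A common place where that point-to-pixel mapping matters in practice is drawing one-pixel hairlines. A minimal Objective-C sketch using the scale factor the quote mentions:

CGFloat scale = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on Retina
CGFloat hairline = 1.0 / scale;              // one physical pixel, expressed in points
UIView *separator =
    [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, hairline)];
// The separator renders one device pixel thick on both kinds of screen.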
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
That doesn't help you when you're dealing with custom graphics inline, however.
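The naming convention referenced above means UIKit does the asset selection for you. A minimal sketch, assuming hypothetical assets logo.png (100x100) and logo@2x.png (200x200) in the bundle:

UIImage *logo = [UIImage imageNamed:@"logo"]; // loads logo@2x.png on Retina devices
// logo.size is 100x100 points either way; logo.scale is 1.0 or 2.0.
UIImageView *logoView = [[UIImageView alloc] initWithImage:logo];
// logoView's frame is in points, so the same layout code works on both screens.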

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera; the camera never moves. It is just a normal webcam (PlayStation Eye).
I want to calibrate the robot and the camera so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I can measure the translation and the rotation between the two frames, but that will probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z-axis can be ignored, so the calibration will be over X and Y only.
It depends on what error is acceptable for you.
We have a similar setup, with a camera that looks at a plane with an object on it that can be moved.
We assume that the image and the plane are parallel.
First let's calculate the rotation. Put the tool in a position where you see it at the center of the image, move it along one axis, and select the point on the image that corresponds to the new tool position.
Those two points will give you a vector in the image coordinate system.
The angle between this vector and the original image axis will give the rotation.
The scale may be calculated in a similar way: knowing the vector length (in pixels) and the distance between the tool positions (in mm or cm) will give you the scale factor between the image and the real-world axes.
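A minimal C sketch of that two-point procedure (names are mine; p0 and p1 are the tool's observed image positions in pixels, travelMM the commanded move):

#include <math.h>

typedef struct { double x, y; } Vec2;

// Rotation of the robot axes relative to the image axes, from the observed move.
static double rotationFromMove(Vec2 p0, Vec2 p1) {
    return atan2(p1.y - p0.y, p1.x - p0.x);   // radians vs. the image x-axis
}

// mm per pixel: commanded travel divided by the observed pixel displacement.
static double scaleFromMove(Vec2 p0, Vec2 p1, double travelMM) {
    return travelMM / hypot(p1.x - p0.x, p1.y - p0.y);
}

// Map an image click to robot X/Y, given the calibration and the image point
// 'origin' observed when the robot sat at robot-space (0, 0).
static Vec2 pixelToRobot(Vec2 click, Vec2 origin, double theta, double mmPerPx) {
    double dx = click.x - origin.x, dy = click.y - origin.y;
    Vec2 robot = { mmPerPx * ( cos(theta) * dx + sin(theta) * dy),
                   mmPerPx * (-sin(theta) * dx + cos(theta) * dy) };
    return robot;
}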
If this method doesn't provide enough accuracy, you may calibrate the camera for distortion and for its position relative to the plane using computer vision techniques, which is more complicated.
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html