What exactly is UIFont's point size? - objective-c

I am struggling to understand exactly what the point size in UIFont means. It's not pixels, and it doesn't appear to match the standard definition of a point, which is 1/72 of an inch.
I worked out the pixel size using -[NSString sizeWithFont:] of fonts at various sizes and got the following:
| Point Size | Pixel Size |
| ---------- | ---------- |
| 10.0 | 13.0 |
| 20.0 | 24.0 |
| 30.0 | 36.0 |
| 40.0 | 47.0 |
| 50.0 | 59.0 |
| 72.0 | 84.0 |
| 99.0 | 115.0 |
| 100.0 | 116.0 |
(I did [@"A" sizeWithFont:[UIFont systemFontOfSize:theSize]])
And looking at the 72.0 point size: that is not one inch, since this is on a device with a DPI of 163, so one inch would be 163.0 pixels, right?
Can anyone explain what a "point" in UIFont terms is, then? That is, is my method above wrong, and if I measured differently I'd find the font really is 163 pixels tall at 72 points? Or is a point simply defined relative to something other than a physical inch?

A font has an internal coordinate system (think of it as a unit square) within which a glyph's vector coordinates are specified at whatever arbitrary size accommodates all the glyphs in the font, plus or minus any margin the font designer chooses.
At 72.0 points the font's unit square is one inch. Glyph x of font y has an arbitrary size in relation to this inch square. Thus a font designer can make a font that appears large or small in relation to other fonts. This is part of the font's 'character'.
So, drawing an 'A' at 72 points tells you that it will be twice as high as an 'A' drawn at 36 points in the same font - and absolutely nothing else about what the actual bitmap size will be.
I.e., for a given font, the only way to determine the relationship between point size and pixels is to measure it.
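A minimal sketch of such a measurement, using the same (since-deprecated) -[NSString sizeWithFont:] call the question used:

```objc
#import <UIKit/UIKit.h>

// Sketch: log what UIKit reports at each point size. Per this answer,
// the relationship to actual pixels is font-specific and must be measured.
void LogMeasuredSizes(void) {
    for (CGFloat pointSize = 12.0; pointSize <= 96.0; pointSize *= 2.0) {
        UIFont *font = [UIFont systemFontOfSize:pointSize];
        CGSize size = [@"A" sizeWithFont:font];
        NSLog(@"%.0f pt -> %.1f x %.1f", pointSize, size.width, size.height);
    }
}
```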

I am not sure how -[NSString sizeWithFont:] measures the height. Does it use the line height, or the distance between the extremes of the Bézier curves? What text did you use?
I believe -[UIFont lineHeight] would be better to measure the height.
Edit:
Also, note that none of the measurement methods returns the size in pixels; they return the size in points. You have to multiply the result by [UIScreen mainScreen].scale.
Note the difference between the typographic points used when constructing the font and the points of iOS's default logical coordinate space. Unfortunately, the difference is not explained very clearly in the documentation.
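A sketch of that conversion (both UIFont's lineHeight and UIScreen's scale have been available since iOS 4):

```objc
#import <UIKit/UIKit.h>

// Sketch: convert a font's line height from logical points to device pixels.
void LogLineHeightInPixels(void) {
    UIFont *font = [UIFont systemFontOfSize:72.0];
    CGFloat scale = [UIScreen mainScreen].scale; // 1.0 non-retina, 2.0 retina
    CGFloat pointHeight = font.lineHeight;       // logical points
    CGFloat pixelHeight = pointHeight * scale;   // device pixels
    NSLog(@"line height: %.1f pt = %.1f px at scale %.1f",
          pointHeight, pixelHeight, scale);
}
```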

I agree this is very confusing. I'm trying to give you some basic explanation here to make things clearer.
First, DPI (dots per inch) comes from printing on physical paper, and so do fonts. The point was invented to describe the physical printed size of text, because the inch is too large for usual text sizes; after some historical evolution it settled at 1/72 of an inch. So yes, if you are writing a document in Word or other word-processing software for printing, you will get text exactly one inch high if you use a 72 pt font.
Second, the theoretical text height is usually different from the rendered strokes you can actually see with your eyes. The original idea of text height came from physical printing: all letters were cast on blocks of type that share the same height, and that shared height is what the point size measures. However, depending on the letters and on the font design, the actual visible part of the text may be a little shorter than the theoretical height. Helvetica Neue is actually very standard: if you measure from the top of a letter "k" to the bottom of a letter "p", it will match the font height.
Third, computer displays screwed up DPI, and the definition of the point along with it. The resolution of a computer display is described by its native pixels, such as 1024 x 768 or 1920 x 1080. Software mostly doesn't care about the physical size of your monitor, because everything would be fuzzy if it scaled screen content the way printing scales to paper; the physical resolution just isn't high enough to keep everything smooth and legible. Instead, software takes a simple, rigid approach: a fixed DPI for whatever monitor you use. For Windows it's 96 DPI; for the Mac it's 72 DPI. That is to say, no matter how many pixels actually make up an inch on your monitor, software ignores it. When the operating system renders text at 72 pt, it is always 96 px high on Windows and 72 px high on the Mac. (That's why Microsoft Word documents always look smaller on a Mac and you usually need to zoom to 125%.)
Finally, iOS is very similar. Whether it's an iPhone, iPod touch, iPad or Apple Watch, iOS uses a fixed 72 DPI for non-retina screens, 144 DPI for @2x retina displays, and 216 DPI for the @3x retina display used on the iPhone 6 Plus.
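To make the fixed-ratio idea concrete:

```
72 pt x 1 (@1x) =  72 px
72 pt x 2 (@2x) = 144 px
72 pt x 3 (@3x) = 216 px
```

How large that looks physically then depends entirely on the device's real pixel density.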
Forget about the real inch. It only exists in physical printing, not on displays. For software displaying text on your screen, a point is just an artificial ratio to physical pixels.

I first wondered if this had something to do with the way CSS pixels are defined at 96 per "inch" while UI layout points are defined at 72 per "inch". (Where, of course, an "inch" has nothing to do with a physical inch.) Why would web standards factor into UIKit business? Well, you may note when examining stack traces in the debugger or crash reports that there's some WebKit code underlying a lot of UIKit, even when you're not using UIWebView. Actually, though, it's simpler than that.
First, the font size is measured from the lowest descender to the highest ascender in regular Latin text -- e.g. from the bottom of the "j" to the top of the "k", or for convenient measure in a single character, the height of "ƒ". (That's U+0192 "LATIN SMALL LETTER F WITH HOOK", easily typed with option-F on a US Mac keyboard. People used it to abbreviate "folder" way back when.) You'll notice that when measured with that scheme, the height in pixels (on a 1x display) matches the specified font size -- e.g. with [UIFont systemFontOfSize:14], "ƒ" will be 14 pixels tall. (Measuring the capital "A" only accounts for an arbitrary portion of the space measured in the font size. This portion may change at smaller font sizes; when rendering font vectors to pixels, "hinting" modifies the results to produce more legible onscreen text.)
However, fonts contain all sorts of glyphs that don't fit into the space defined by that metric. There are letters with diacritics above an ascender in eastern European languages, and all kinds of punctuation marks and special characters that fit in a "layout box" much larger. (See the Math Symbols section in Mac OS X's Special Characters window for plenty of examples.)
In the CGSize returned by -[NSString sizeWithFont:], the width accounts for the specific characters in the string, but the height only reflects the number of lines. Line height is a metric specified by the font, and related to the "layout box" encompassing the font's largest characters.
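A quick sketch to check that: if the height only reflects the line count, a plain "A" and a tall "ƒ" should report the same height, and that height should match the font's line height (an expectation based on this answer, not on documentation):

```objc
#import <UIKit/UIKit.h>

// Sketch: the height from sizeWithFont: should not depend on which
// glyphs the string contains, only on the number of lines.
void CompareHeights(void) {
    UIFont *font = [UIFont systemFontOfSize:14.0];
    CGSize a = [@"A" sizeWithFont:font];
    CGSize f = [@"\u0192" sizeWithFont:font]; // LATIN SMALL LETTER F WITH HOOK
    NSLog(@"A: %.1f  f-hook: %.1f  lineHeight: %.1f",
          a.height, f.height, font.lineHeight);
}
```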

The truth, as far as I have been able to ascertain, is that UIFont lies. All of UIKit takes liberties with fonts. If you want the truth you need to use Core Text, but in a lot of cases it will be slower! (So in the case of your pixel-height table, I think UIKit is applying some sort of a + bx adjustment, where x is the point size.)
So why does it do this? Speed! UIKit rounds things up and fiddles with spacing so that it can cache bitmaps. Or at least that was my takeaway!
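For what it's worth, the asker's table does fit a straight line of that form quite well (an eyeball fit, nothing official):

```
pixelHeight ≈ 1.14 x pointSize + 1.6

 10 pt ->  13.0 px  (measured  13.0)
 72 pt ->  83.7 px  (measured  84.0)
100 pt -> 115.6 px  (measured 116.0)
```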

Related

Convert measured inches (and pixels) to float?

Inches = Pixels / dpi
I noticed that PDF Clown uses measurements in float: how do I convert inches and pixels to float for width, height, etc. to properly work in PDF? Does anybody have a mathematical formula for this?
1) Inches --> float
2) Pixels --> float
PDF's coordinate system is based on the concept of device independence, that is, the ability to preserve the geometric relationship between graphics objects and their page, no matter the device they are rendered through. Such a device-independent coordinate system is called user space. By default, the length of its unit (approximately) corresponds to a typographic point (1/72 inch), defining the so-called default user space.
But, as mkl appropriately warned you, the relation between user space and device space can be altered through the current transformation matrix (CTM): this practically means that the user space unit's length matches the typographic point only until skewing or scaling is applied! Furthermore, since PDF 1.6 the default user space unit may be overridden by setting the UserUnit entry in the page dictionary.
So, the short answer is that, in PDF, one inch corresponds to 72 default user space units (assuming no CTM interference); on the other hand, as this coordinate system is (by definition) device-independent, it doesn't make sense to reason about pixels: they exist only in discrete spaces of samples, whilst PDF defines a continuous space of vector graphics which is agnostic about device resolution!
If you need to map into PDF some graphics which are natively expressed in pixels, then prior conversion to inches may be a sensible approach.
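A sketch of those two conversions in plain C (the arithmetic is the same in whatever language your PDF Clown binding uses; the dpi argument is whatever resolution the pixel measurements were authored at, which you have to know or assume):

```objc
// Default user space: 72 units per inch (no CTM scaling, no UserUnit override).
double InchesToUserSpaceUnits(double inches) {
    return inches * 72.0;
}

double PixelsToUserSpaceUnits(double pixels, double dpi) {
    // Pixels -> inches via the source resolution, then inches -> units.
    return (pixels / dpi) * 72.0;
}
```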
BTW, the floating-point data type was chosen to represent user space measurements just because it was obviously the most convenient approximation for mapping such a continuum; I guess that after this explanation you won't confuse measures with measurements any more.
An extensive description about PDF's coordinate system can be found in § 4.2 of the current spec.

Graphics.DrawString with high resolution bitmaps == LARGE TEXT

I have an app that creates a large bitmap and later the user can add some labels. Everything works great as long as the base bitmap is the default 96x96 resolution. If I bump it up to 300 for instance, then the text applied with Graphics.DrawString is much too large - a petite size 8 or 10 font displays like it is 20.
On the one hand, it makes sense given the resolution increase, but on the other, you'd think the Fonts would scale. MeasureString returns a larger size when measured on a 300 vs 96 dpi bitmap, which wasn't really what I expected.
I've tried tricking it by creating a small bitmap of the appropriate size, printing to it, then pasting that onto the master image. But when it is pasted onto the high-res image, the pasted image gets enlarged.
The only other thing I can think of is to create a high res temp bitmap, print to it, then shrink it before pasting to the main image. That seems like a long way to go. Is there a compositing or overlay type setting that allows this? Are font sizes only true for a 96 dpi canvas?
Thanks for any hints/advice!
The size of a font is a physical measure: one point is 1/72 inch. So if you draw into a bitmap that has 300 dots per inch, your font is going to use a lot more dots for the requested number of inches. When you then display it on a 300 dpi device, you'll get back the size in inches that you asked for.
Problem is, you are not displaying it on a 300 dpi device; you are displaying it on a 96 dpi device. So it looks much bigger.
Clearly you don't really want a 300 dpi bitmap. Or you want to draw it three times smaller. Take your pick.
If you want a consistent size in pixels, specify UnitPixel when creating your Font object.
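To put numbers on it (assuming the default point-based font sizing):

```
pixels = points / 72 x bitmap DPI

8 pt on a  96-dpi bitmap ->  8/72 x  96 ≈ 10.7 px
8 pt on a 300-dpi bitmap ->  8/72 x 300 ≈ 33.3 px  (looks like ~25 pt on a 96-dpi screen)
```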

How to design Metro UIs with fonts that look good on any resolution?

When you look at the Guidelines for fonts, you see that fonts are specified in points. A point is 1/72 of an inch, so it is an absolute measure: a 10-point character should show at the exact same absolute size on any monitor at any resolution. That would make sense to me, as I want to be able to read text at the same size whether on a 10 in tablet or a 23 in monitor. In other words, I want my text to be readable on a tablet, but I do not want it to be too big on a monitor.
On the other hand, I can understand that some UI elements could be specified in pixels, as in the Page layout guidelines.
However, in XAML font size is specified in pixels, which is device dependent (to my understanding). Hence the font size will look tiny on a monitor with a higher resolution! See this post for more details. The answer in that post says "this way, you are getting a consistent font size". I can't see how I am getting a consistent size when it changes when the resolution changes?!
Should I load different font size programmatically depending on the resolution of the device?
I see here that Windows does some scaling adjustment depending on the DPI. Will this adjustment be enough for my users to have a great experience on a tablet and on, say, a 20 inch monitor (or should I programmatically change the font size depending on the device resolution)?
Bonus question: WHY are the Font Guidelines written using points when the software tools do not use points (like, what were they thinking)?
"What were they thinking" is extensively covered in this blog post.
You'll also see described how scaling for pixel density is automatic:
For those who buy these higher pixel-density screens, we want to ensure that their apps, text, and images will look both beautiful and usable on these devices. Early on, we explored continuous scaling to the pixel density, which would maintain the size of an object in inches, but we found that most apps use bitmap images, which could look blurry when scaled up or down to an unpredictable size. Instead, Windows 8 uses predictable scale percentages to ensure that Windows will look great on these devices. There are three scale percentages in Windows 8:
100% when no scaling is applied
140% for HD tablets
180% for quad-XGA tablets
The percentages are optimized for real devices in the ecosystem. 140% and 180% may seem like odd scale percentage choices, but they make sense when you think about how this will work on real hardware.
For example, the 140% scale is optimized for 1920x1080 HD tablets, which has 140% of the baseline tablet resolution of 1366x768. These optimized scale factors maintain consistent layouts between the baseline tablet and the HD tablet, because the effective resolution is the same on the two devices. Each scale percentage was chosen to ensure that a layout that was designed for 100% 1366x768 tablets has content that is the same physical size and the same layout on 140% HD tablets or 180% quad-XGA tablets.
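The arithmetic behind that last claim:

```
1920 / 1.40 ≈ 1371 ≈ 1366   (effective width in layout pixels)
1080 / 1.40 ≈  771 ≈  768   (effective height in layout pixels)
```

So a layout designed for a 100% 1366x768 tablet fills a 140% 1920x1080 tablet essentially unchanged.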

Xcode Coordinates for iPad Retina Displays

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image, to have it fit the entire screen on the display I need to set its size to 1024x768 and not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying, since some of my graphics are not exactly 2x in either width or height from their originals. I can't seem to set half coordinates such as 1.5; it can only be 1 or 2 inside Interface Builder.
Should I just do my interface design in code at this point and forget interface builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

One point does not necessarily correspond to one pixel on the screen.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways. In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
That doesn't help you when you're dealing with custom graphics inline, however.
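As a concrete illustration of that split between points and pixels for images (the file names here are placeholders; the automatic @2x lookup is standard UIKit behavior):

```objc
#import <UIKit/UIKit.h>

// background.png    -> 1024 x 768 pixels, used on non-retina iPads
// background@2x.png -> 2048 x 1536 pixels, picked up automatically on retina
UIImageView *MakeFullScreenBackground(void) {
    UIImageView *imageView =
        [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"background"]];
    imageView.frame = CGRectMake(0, 0, 1024, 768); // frame is in points, not pixels
    return imageView;
}
```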

What is the printing resolution of a XAML file?

I'm a designer and I like having a little control over the dimensions...
I am styling a XAML file that is meant to be printed.
Since dimensions are in pixels, I'd like to know which resolution I should base my calculations on to convert lengths in cm.
Thank you!
According to Charles, Silverlight is fixed at 96 DPI:
As you know, a Silverlight program normally sizes graphical objects and controls entirely in units of pixels. However, when the printer is involved, coordinates and sizes are in device-independent units of 1/96th inch. Regardless of the actual resolution of the printer, from a Silverlight program the printer always appears to be a 96 DPI device.

...

PrintPageEventArgs has two handy get-only properties that also report sizes in units of 1/96th inch: PrintableArea of type Size provides the dimensions of the printable area of the page, and PageMargins of type Thickness is the width of the left, top, right and bottom unprintable edges. Add these two together (in the right way) and you get the full size of the paper.
I did some quick searching, but couldn't turn up this info in the documentation. Leave it to Charles to know this sort of information.
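Given that fixed 96 DPI, converting centimetres for print layout is just arithmetic:

```
units = cm / 2.54 x 96

1 cm             ≈  37.8 units
21 cm (A4 width) ≈ 793.7 units
```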