Convert measured inches (and pixels) to float?

Inches = Pixels / dpi
I noticed that PDF Clown uses measurements in float: how do I convert inches and pixels to float for width, height, etc. so that they work properly in PDF? Does anybody have a mathematical formula for this?
1) Inches --> float
2) Pixels --> float

PDF's coordinate system is based on the concept of device independence, that is, the ability to preserve the geometric relationship between graphics objects and their page, no matter the device they are rendered on. Such a device-independent coordinate system is called user space. By default, the length of its unit (approximately) corresponds to a typographic point (1/72 inch), defining the so-called default user space.
But, as mkl appropriately warned you, the relation between user space and device space can be altered through the current transformation matrix (CTM): in practice this means that the user space unit's length matches the typographic point only until skewing or scaling are applied! Furthermore, since PDF 1.6 the default user space unit may be overridden by setting the UserUnit entry in the page dictionary.
So, the short answer is that, in PDF, one inch corresponds to 72 default user space units (provided the CTM doesn't interfere); on the other hand, as this coordinate system is (by definition) device-independent, it doesn't make sense to reason about pixels -- they exist only in discrete spaces of samples, whilst PDF defines a continuous space of vector graphics which is agnostic about device resolution.
If you need to map into PDF some graphics which are natively expressed in pixels, then prior conversion to inches may be a sensible approach.
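To illustrate the arithmetic (this is not PDF Clown-specific, just a sketch assuming the default user space with no CTM scaling and no UserUnit override, and a known source dpi for the pixel case):

// Hypothetical helpers illustrating the conversions described above.
// 1 inch = 72 default user space units; Inches = Pixels / dpi.
float inchesToUserSpaceUnits(float inches)
{
    return inches * 72.0f;
}

float pixelsToUserSpaceUnits(float pixels, float dpi)
{
    // pixels -> inches -> default user space units
    return (pixels / dpi) * 72.0f;
}

So, for example, a 300-pixel-wide image scanned at 150 dpi maps to (300 / 150) * 72 = 144 default user space units.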
BTW, the floating-point data type was chosen to represent user space measurements simply because it is the most convenient approximation for mapping such a continuum -- I guess that after this explanation you won't confuse measures with measurements any more.
An extensive description about PDF's coordinate system can be found in § 4.2 of the current spec.

Vulkan swapchain format UNORM vs SRGB?

In a Vulkan program, fragment shaders generally output single-precision floating-point colors in the range 0.0 to 1.0 for each red/green/blue channel, and these are then written to (blended into) the swapchain image that is then presented to screen. The floating-point values are encoded into bits according to the format of the swapchain image (specified when the swapchain is created).
When I change my swapchain format from VK_FORMAT_B8G8R8A8_UNORM to VK_FORMAT_B8G8R8A8_SRGB I observe that the overall brightness of the frames is greatly increased, and also there are some minor color shifts.
My understanding of the SRGB format was that it was a lot like the UNORM format, just having a different mapping of floating-point values to 8-bit integers, such that it had higher color resolution in some areas and less in others, but the actual meaning of the "pre-encoded" RGB floating-point values remained unchanged.
So I'm a little surprised about the brightness increase. Is my understanding of SRGB encoding wrong? And/or is such a brightness increase expected vs UNORM?
Or maybe I have a bug and a brightness increase is not expected?
Update:
I've observed that if I use SRGB swapchain images and also load my images/textures in VK_FORMAT_B8G8R8A8_SRGB format rather than VK_FORMAT_B8G8R8A8_UNORM then the extra brightness goes away. It looks the same as if I use VK_FORMAT_B8G8R8A8_UNORM swapchain images and load my images/textures in VK_FORMAT_B8G8R8A8_UNORM format.
Also, if I put the swapchain image into VK_FORMAT_B8G8R8A8_UNORM format and then load the images/textures with VK_FORMAT_B8G8R8A8_SRGB, the frames look extra dark / almost black.
Some clarity about what is going on would be helpful.
This is a colorspace and display issue.
Fragment shaders are assumed to be writing values in a linear RGB colorspace. As such, if you are rendering to an image that has a linear RGB colorspace (UNORM), the values your FS produces are interpreted directly. When you render to an image which has an sRGB colorspace, you are writing values from one space (linear) into another space (sRGB). As such, these values are automatically converted into the sRGB colorspace. It's no different from transforming a position from model space to world space or whatever.
What is different is the fact that you've been looking at your scene incorrectly. See, odds are very good that your swapchain's VkSurfaceFormatKHR::colorSpace value is VK_COLOR_SPACE_SRGB_NONLINEAR_KHR.
VkSurfaceFormatKHR::colorSpace tells you how the display engine will interpret the pixel data in swapchain images you present. This is not a setting you provide; this is the display engine telling you something about how it is going to interpret the values you send it.
I say "odds are very good" that it is sRGB because, outside of extensions, this is the only possible value. You are rendering to an sRGB display device whether you like it or not.
So if you write to a UNORM image, the display device will read the actual bits of data and interpret them as if they are in the sRGB colorspace. This operation only makes sense if the data your fragment shader wrote itself is in the sRGB colorspace.
However, that's generally not how FS's generate data. The lighting computations you perform only make sense on color values in a linear RGB colorspace. So unless you wrote your FS to deliberately do sRGB conversion after computing the final color value, odds are good that all of your results have been in a linear RGB colorspace. And that's what you've been writing to your framebuffer.
And then the display engine mangles it.
By using an sRGB image as your destination, you force a colorspace conversion from linear RGB to sRGB, which will then be interpreted by the display engine as sRGB values. This means that your lighting equations are finally producing the correct results.
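To make "conversion" concrete, this is roughly the transfer function the hardware applies for you on writes to (and reads from) *_SRGB images. You never write this yourself when using sRGB formats; it is shown only as a sketch of what happens behind the scenes:

#include <cmath>

// Approximate sRGB transfer functions (piecewise definition from the sRGB spec).
float linearToSrgb(float c)   // applied automatically when writing to an *_SRGB image
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

float srgbToLinear(float c)   // applied automatically when sampling an *_SRGB texture
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}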
Failure to do gamma-correct rendering properly (including for the source texture images, which are almost certainly also in the sRGB colorspace, since that is the default colorspace for most image editors; the exceptions are things like gloss maps, normal maps, or other images that aren't storing "colors") leads to cases where linear light attenuation appears more correct than quadratic attenuation, even though quadratic is how reality works.
This is gamma correction.
Using a swapchain with VK_FORMAT_B8G8R8A8_SRGB leverages the ability to apply gamma correction as the final step in your render pipeline. This happens for you automatically behind the scenes.
That is the only place you want gamma correction to happen. Make sure your shaders are not applying gamma correction. You might see it as:
color = pow(color, vec3(1.0/2.2));
If your swapchain does the gamma correction, you do not need to do it in your shaders.
Most images are SRGB (pictures, color textures, etc). Linear images are for specific data, like a blue noise texture or heightmap.
Load SRGB images w/ VK_FORMAT_R8G8B8A8_SRGB
Load LINEAR images w/ VK_FORMAT_R8G8B8A8_UNORM
No shader conversion is required if the rules outlined above are followed.
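If it helps, here is a minimal sketch of preferring an sRGB surface format when setting up the swapchain (assuming you already have a valid VkPhysicalDevice and VkSurfaceKHR; the function name is just for illustration):

#include <vulkan/vulkan.h>
#include <vector>

VkSurfaceFormatKHR chooseSurfaceFormat(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, nullptr);
    std::vector<VkSurfaceFormatKHR> formats(count);
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, formats.data());

    for (const VkSurfaceFormatKHR& f : formats) {
        // Prefer sRGB so the hardware applies gamma encoding on write.
        if (f.format == VK_FORMAT_B8G8R8A8_SRGB &&
            f.colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR)
            return f;
    }
    return formats[0];   // fall back to whatever the surface reports first
}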

Is there a way to set the size of an EPS image which is embedded in a PostScript?

I have an EPS (Encapsulated PostScript) image and I embedded it in a PostScript file. I'd like to set its size, for example in millimeters, like 50x50 or something. I found a way to resize it with the scale operator, like
.7 .7 scale
but that way I can only give it a ratio, not a concrete size.
Is there a way to do so?
PostScript is device independent, so you aren't supposed to try and set things to a specific size in terms of output. This allows devices to use differently sized media (for example including hardware margins), or different resolutions, and still produce the desired output.
Bearing in mind that EPS files are intended for use by applications that will embed the EPS in their own output, it's important that the EPS itself be scalable. The application can read the BoundingBox from the EPS file, and then scale that into its own co-ordinate system so that the EPS fits a specific size at a particular position on the output.
Basically, you need to work out what scale factor to use, by taking the EPS BoundingBox and working out what scale factor will fit that into the area you want it to cover on your output.
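For example, a rough sketch of that calculation (the function and parameter names are made up; the bounding box values come from the EPS's %%BoundingBox comment and are in points, 1/72 inch, with 25.4 mm to the inch):

// Scale factors to fit an EPS bounding box into a target size given in millimetres.
struct Scale { double x; double y; };

Scale epsScaleForTargetMM(double llx, double lly, double urx, double ury,
                          double targetWidthMM, double targetHeightMM)
{
    const double ptPerMM = 72.0 / 25.4;
    return { (targetWidthMM  * ptPerMM) / (urx - llx),
             (targetHeightMM * ptPerMM) / (ury - lly) };
}

The resulting x and y values are what you would pass to the scale operator (e.g. "sx sy scale") before drawing the EPS; use the smaller of the two if you want to preserve the aspect ratio.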

What exactly is UIFont's point size?

I am struggling to understand exactly what the point size in UIFont means. It's not pixels, and it doesn't appear to be the standard definition of a point, which is 1/72 of an inch.
I worked out the pixel size using -[NSString sizeWithFont:] of fonts at various sizes and got the following:
| Point Size | Pixel Size |
| ---------- | ---------- |
| 10.0 | 13.0 |
| 20.0 | 24.0 |
| 30.0 | 36.0 |
| 40.0 | 47.0 |
| 50.0 | 59.0 |
| 72.0 | 84.0 |
| 99.0 | 115.0 |
| 100.0 | 116.0 |
(I did [@"A" sizeWithFont:[UIFont systemFontOfSize:theSize]])
And looking at the 72.0 point size, that is not 1-inch since this is on a device with a DPI of 163, so 1-inch would be 163.0 pixels, right?
Can anyone explain what a "point" in UIFont terms is then? i.e. is my method above wrong and really if I used something else I'd see something about the font is 163 pixels at 72 point? Or is it purely that a point is defined from something else?
A font has an internal coordinate system; think of it as a unit square within which a glyph's vector coordinates are specified at whatever arbitrary size accommodates all the glyphs in the font, plus or minus any amount of margin the font designer chooses.
At 72.0 points the font's unit square is one inch. Glyph x of font y has an arbitrary size in relation to this inch square. Thus a font designer can make a font that appears large or small in relation to other fonts. This is part of the font's 'character'.
So, drawing an 'A' at 72 points tells you that it will be twice as high as an 'A' drawn at 36 points in the same font - and absolutely nothing else about what the actual bitmap size will be.
I.e., for a given font, the only way to determine the relationship between point size and pixels is to measure it.
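To put the unit-square explanation above into a formula (a sketch; unitsPerEm and glyphHeightUnits are hypothetical values you would have to read out of the font itself, e.g. 1000 or 2048 units per em):

// Design-space height of a glyph scaled to a given point size.
// The em square spans unitsPerEm font units and is scaled to pointSize points.
double renderedHeightInPoints(double glyphHeightUnits, double unitsPerEm, double pointSize)
{
    return (glyphHeightUnits / unitsPerEm) * pointSize;
}

The visible glyph occupies only whatever fraction of the em square the designer chose, which is why, short of reading the font's own metrics, measuring is the only practical way to know the pixel height.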
I am not sure how -[NSString sizeWithFont:] measures the height. Does it use line height or the difference between the peaks of the beziers? What text did you use?
I believe -[UIFont lineHeight] would be better to measure the height.
Edit:
Also, note that none of the measurement methods returns the size in pixels. It returns the size in points. You have to multiply the result by [UIScreen mainScreen].scale.
Note the difference between typographic points used when constructing the font and points from iOS default logical coordinate space. Unfortunately, the difference is not explained very clearly in the documentation.
I agree this is very confusing. I'm trying to give you some basic explanation here to make the things clearer.
First, the DPI (dots-per-inch) thing comes from printing on physical paper, and so do fonts. The unit "point" was invented to describe the physical printed size of text, simply because the inch is too large for usual text sizes. A point is the length of 1/72 inch (the exact value actually evolved over history). So yes, if you are writing a document in Word or other word-processing software for printing, you will get text exactly one inch high if you use a 72pt font.
Second, the theoretical text height is usually different from the rendered strokes you can actually see with your eyes. The original text-height idea came from the physical glyphs used for printing: all letters were engraved on glyph blocks which share the same height, and that height matches the font's point size. However, depending on the letters and the font design, the actual visible part of the text may be a little bit shorter than the theoretical height. Helvetica Neue is actually very standard: if you measure from the top of a letter "k" to the bottom of a letter "p", it will match the font height.
Third, computer displays screwed up DPI, and the definition of the point along with it. The resolution of computer displays is described by their native pixels, such as 1024 x 768 or 1920 x 1080. Software doesn't actually care about the physical size of your monitor, because everything would be very fuzzy if it scaled screen content the way printing on paper does — the physical resolution just isn't high enough to make everything smooth and legible. Software instead takes a very simple, fixed approach: a fixed DPI for whatever monitor you use. For Windows it's 96 DPI; for the Mac it's 72 DPI. That is to say, no matter how many pixels make up an inch on your monitor, software just ignores it. When the operating system renders text at 72pt, it will always be 96px high on Windows and 72px high on the Mac. (That's why Microsoft Word documents always look smaller on the Mac and you usually need to zoom to 125%.)
Finally, on iOS it's very similar. No matter whether it's an iPhone, iPod touch, iPad or Apple Watch, iOS uses a fixed 72 DPI for non-retina screens, 144 DPI for @2x retina displays, and 216 DPI for the @3x retina display used on the iPhone 6 Plus.
Forget about the real inch. It only exists in actual printing, not on displays. For software displaying text on your screen, a point is just an artificial ratio to physical pixels.
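In other words, the mapping the OS applies is just that fixed ratio, something like this (a sketch, with a made-up function name, using the assumed DPI values above):

// Fixed point-to-pixel mapping: the OS assumes a constant DPI regardless of the monitor.
double pointsToPixels(double points, double assumedDpi)   // 96.0 on Windows, 72.0 on classic Mac
{
    return points * (assumedDpi / 72.0);   // 72pt -> 96px on Windows, 72px on the Mac
}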
I first wondered if this had something to do with the way CSS pixels are defined at 96 per "inch" while UI layout points are defined at 72 per "inch". (Where, of course, an "inch" has nothing to do with a physical inch.) Why would web standards factor into UIKit business? Well, you may note when examining stack traces in the debugger or crash reports that there's some WebKit code underlying a lot of UIKit, even when you're not using UIWebView. Actually, though, it's simpler than that.
First, the font size is measured from the lowest descender to the highest ascender in regular Latin text -- e.g. from the bottom of the "j" to the top of the "k", or for convenient measure in a single character, the height of "ƒ". (That's U+0192 "LATIN SMALL LETTER F WITH HOOK", easily typed with option-F on a US Mac keyboard. People used it to abbreviate "folder" way back when.) You'll notice that when measured with that scheme, the height in pixels (on a 1x display) matches the specified font size -- e.g. with [UIFont systemFontOfSize:14], "ƒ" will be 14 pixels tall. (Measuring the capital "A" only accounts for an arbitrary portion of the space measured in the font size. This portion may change at smaller font sizes; when rendering font vectors to pixels, "hinting" modifies the results to produce more legible onscreen text.)
However, fonts contain all sorts of glyphs that don't fit into the space defined by that metric. There are letters with diacritics above an ascender in eastern European languages, and all kinds of punctuation marks and special characters that fit in a "layout box" much larger. (See the Math Symbols section in Mac OS X's Special Characters window for plenty of examples.)
In the CGSize returned by -[NSString sizeWithFont:], the width accounts for the specific characters in the string, but the height only reflects the number of lines. Line height is a metric specified by the font, and related to the "layout box" encompassing the font's largest characters.
The truth, as far as I have been able to ascertain, is that UIFont lies. All of UIKit takes liberties with fonts. If you want the truth you need to use CoreText, but in a lot of cases it will be slower! (So in the case of your pixel-height table, I think it adds some sort of a + b·x factor, where x is the point size.)
So why does it do this? Speed! UIKit rounds things up and fiddles with spacing so that it can cache bitmaps. Or at least that was my takeaway!

Xcode Coordinates for iPad Retina Displays

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image, to have it fit the entire screen on the display I need to set its size to 1024x768 and not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying since some of my graphics are not exactly 2x in either width or height from their originals, and I can't seem to set half-point coordinates such as 1.5 in Interface Builder; it can only be 1 or 2.
Should I just do my interface design in code at this point and forget interface builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

One point does not necessarily correspond to one pixel on the screen.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways. In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
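In code terms the point-to-pixel relationship is just the screen scale factor (a sketch, with made-up names):

// Points (what Interface Builder shows) multiplied by the screen scale give pixels:
// 1024 x 768 points at scale 2.0 -> 2048 x 1536 pixels on the Retina iPad.
struct SizePx { double width; double height; };

SizePx pointsToPixelSize(double widthPt, double heightPt, double scale)
{
    return { widthPt * scale, heightPt * scale };
}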
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
That doesn't help you when you're dealing with custom graphics inline, however.

resolution from a PDFPage?

I have a PDF document that is created by making NSImages whose size is in 72dpi points; each has a single representation, which is measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1) Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both size and pixel size in 72dpi.
2) Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
1) The PDF coordinate system is in points (1/72 inch) by default.
2) The PDF coordinate system is devoid of resolution. (This is a white lie - the resolution is effectively the limits of 32-bit floating point numbers.)
3) Images in PDF do not inherently have any resolution attached to them. (This is a white lie - images compressed with JPEG2000 still have resolution in their embedded metadata.)
4) An image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
5) Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
If the image is not being rendered at the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image, and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points.
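A sketch of that calculation, assuming you have already extracted the CTM in effect when the image is drawn (how to get that matrix is the hard part, discussed next; the struct and function names here are made up):

#include <cmath>

struct Matrix { double a, b, c, d, e, f; };   // PDF matrix [a b c d e f]
struct Point  { double x, y; };

static Point apply(const Matrix& m, Point p)
{
    return { m.a * p.x + m.c * p.y + m.e,
             m.b * p.x + m.d * p.y + m.f };
}

// Push the unit-square corners through the CTM, measure the edges in points,
// then derive dpi from the image's sample counts.
void imageDpi(const Matrix& ctm, double widthSamples, double heightSamples,
              double& xdpi, double& ydpi)
{
    Point o = apply(ctm, {0, 0});
    Point u = apply(ctm, {1, 0});
    Point v = apply(ctm, {0, 1});

    double widthPts  = std::hypot(u.x - o.x, u.y - o.y);
    double heightPts = std::hypot(v.x - o.x, v.y - o.y);

    xdpi = widthSamples  / (widthPts  / 72.0);
    ydpi = heightSamples / (heightPts / 72.0);
}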
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing such a toolkit is several person-years of work.