Capitals, some other letters elevated above baseline in TTF - truetype

When I use TrueType fonts with SDL (and I don't know if SDL has anything to do with it), some of the letters are elevated above the baseline. With the High Tower Text font, capital letters and the lower-case 't' sit noticeably above the baseline:
In Arial, it's just the 't':
But Tahoma, and Times, don't seem to have a problem:
It is consistent between machines, so I doubt it's the fonts.
I'm using the SDL_TTF library. Rendering is done thus:
SDL_Surface* surfaceToPrintOn = TTF_RenderText_Solid(ttfFontPointer, textToPrint, color);
I never see this problem using these fonts in other programs. How can I get them to look flat and neat?
TIA.
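One way to narrow this down is to compare the glyph metrics SDL_ttf reads from a misbehaving face with those from a well-behaved one. A diagnostic sketch (the font file name is hypothetical; substitute the face being tested):

#include <stdio.h>
#include <SDL_ttf.h>

int main(void)
{
    if (TTF_Init() != 0) return 1;
    /* hypothetical file name - substitute the face being tested */
    TTF_Font *font = TTF_OpenFont("HTOWERT.TTF", 24);
    if (!font) { fprintf(stderr, "%s\n", TTF_GetError()); return 1; }

    for (const char *p = "Ttxo"; *p; ++p) {
        int minx, maxx, miny, maxy, advance;
        if (TTF_GlyphMetrics(font, (Uint16)*p, &minx, &maxx,
                             &miny, &maxy, &advance) == 0)
            printf("'%c': miny=%d maxy=%d (font ascent=%d)\n",
                   *p, miny, maxy, TTF_FontAscent(font));
    }
    TTF_CloseFont(font);
    TTF_Quit();
    return 0;
}

If maxy for 'T' and 't' exceeds the font ascent only in the problem faces, the elevation is already in the metrics SDL_ttf reads from the font, rather than introduced by the blitting code.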

Related

Thai character not rendered correctly in PDF

My app should be able to output a PDF file containing the user guide in several supported languages. (I'm using pdfkit)
I had some trouble finding a suitable font for Thai: some fonts claiming Thai support (including Noto Thai from Google) would output squares, question marks, or even worse unreadable stuff.
After a bit of research, I found one that seemed to work reasonably well, until our Thai guy noted that the characters
ต่ำ
were rendered as in the picture below, with the two marks above the first character collapsed, one covering the other.
I'm using the Nimbus Sans Thai family downloaded from myfonts.com, which, by the way, seems able to render those characters correctly, as you can see by pasting ต่ำ into the site's preview input.
Any hints?
Your font is incomplete in a certain way: it lacks some glyphs that usually reside in the Private Use Area (PUA) of Unicode.
Some applications (Microsoft Word, for one) can work around this problem on their own, but your rendering app (and Adobe Acrobat Viewer) does not.
You should either find a font in which these glyphs are present, or find an application that repositions the existing glyphs itself.
Many fonts, despite claiming Thai support (and they do, indeed, contain the "regular" Thai glyphs), can be incomplete.
Besides the canonical glyphs, a well-formed font should contain a "Private Use Area" (PUA) subrange holding glyphs in non-canonical forms. Those glyphs include:
Tone marks shifted to an upper position for use in combination with upper vowels (SARA_I, SARA_UE, etc.), and shifted to a lower position for the case of consonant + tone mark with no upper vowel;
Tone marks and upper vowels slightly shifted to the left for use in combination with PO_PLA, FO_FAN, etc. (otherwise they would overlap the consonants' upper tails);
Both effects combined, e.g. a tone mark shifted down and to the left at the same time;
Special glyphs for YO_YING and THO_THAN (with no tail) for use in combination with under-vowels;
Several more.
Normally, when a rendering app finds the above-mentioned character combinations, it looks for substitute glyphs in the PUA. If none are found, it simply falls back to the default glyphs, which is what happens in your case.
Here are two screenshots of the PUA ranges of Arial Unicode and FreeSerif, which are self-explanatory: FreeSerif's PUA is empty. I think the same problem occurs with your Nimbus font.
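A quick way to check whether a font actually carries these PUA variants is to ask it for the code points directly. The sketch below uses FreeType; U+F700-U+F71A is the range Microsoft-style Thai fonts conventionally use for the shifted variants (an assumption worth verifying for any particular font), and the file name is hypothetical:

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

int main(void)
{
    FT_Library lib;
    FT_Face face;
    if (FT_Init_FreeType(&lib)) return 1;
    /* hypothetical path - point this at the font under test */
    if (FT_New_Face(lib, "NimbusSansThai.ttf", 0, &face)) return 1;

    /* U+F700..U+F71A: the PUA slots Microsoft-style Thai fonts use
       for shifted tone marks and tailless consonant variants */
    int missing = 0;
    for (FT_ULong cp = 0xF700; cp <= 0xF71A; ++cp)
        if (FT_Get_Char_Index(face, cp) == 0) {
            printf("no glyph for U+%04lX\n", (unsigned long)cp);
            ++missing;
        }
    printf("%d of 27 PUA variants missing\n", missing);

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}

If most of those lookups return glyph index 0, the font is incomplete in exactly the sense described above.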
And a final observation: incorrect fonts can be incorrect in different ways. Above I have described the more canonical case, in which the standard glyphs place the tone marks in upper positions, while the non-standard, shifted-down forms are in the PUA (or absent, which makes the font incomplete).
There are, however, fonts that behave the opposite way; they (only) contain tone marks in lower positions. This is what you seem to observe.
The problem is that PDFKit does not perform complex script rendering.
Several scripts, such as Arabic and Thai, require glyph substitution and repositioning depending on context (position in the string, neighbouring characters), and PDFKit does not seem to do this.
PDF viewer applications display exactly what is defined in the PDF file. The Nimbus Sans Thai font probably includes all the required glyphs, but what bytebuster explains in his answer needs to be performed by PDFKit, not by the viewer application.
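For illustration, this contextual work is what a shaping engine does. The sketch below feeds ต่ำ through HarfBuzz's C API, which applies the GSUB/GPOS substitutions and mark offsets that PDFKit skips (the font path is hypothetical):

#include <stdio.h>
#include <hb.h>

int main(void)
{
    /* hypothetical path - any Thai-capable font file */
    hb_blob_t *blob = hb_blob_create_from_file("NimbusSansThai.ttf");
    hb_face_t *face = hb_face_create(blob, 0);
    hb_font_t *font = hb_font_create(face);

    hb_buffer_t *buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, "\xe0\xb8\x95\xe0\xb9\x88\xe0\xb8\xb3", -1, 0, -1); /* ต่ำ */
    hb_buffer_guess_segment_properties(buf); /* detects Thai script, LTR */

    hb_shape(font, buf, NULL, 0); /* glyph substitution + mark positioning */

    unsigned int n = hb_buffer_get_length(buf);
    hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, NULL);
    hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, NULL);
    for (unsigned int i = 0; i < n; ++i)
        printf("glyph %u at offset (%d, %d)\n",
               info[i].codepoint, pos[i].x_offset, pos[i].y_offset);

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
    hb_face_destroy(face);
    hb_blob_destroy(blob);
    return 0;
}

With a font whose Thai layout tables are present, the tone mark comes back with a nonzero offset; when this step is skipped, every offset stays at its default and the marks stack on top of each other, which matches the collapse described above.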

Prevent anti-aliasing (or sub-pixel rendering) of a TrueType font

This is how the .ttf font is rendered:
I have created this vector-only TrueType font using FontForge.
I want to use this font in applications which require vector-based glyphs and do not support loading .ttf embedded bitmaps (which do not seem to have this problem).
On certain color schemes, the sub-pixel rendering that Windows does makes the font completely unreadable. The effect is present in most .ttf fonts, but is much stronger on fonts with pixel-perfect edges like mine.
Does anybody know of any programmable hinting tricks or font settings that will allow the font to render pixel-perfectly instead of with this red/blue halo? I would like the font to work properly without OS modifications such as disabling ClearType.
To clarify, this is a question about leveraging the TrueType instruction set, or changing a TrueType font setting (not a system or application setting) that I may have neglected to set properly, to make the font render legibly (if possible).
Working Solution
Credit goes to Brian Nixon for posting the solution URL, and to Erik Olofsson for researching and posting the solution on his blog.
Erik Olofsson provides a solution that forces Windows font API to prioritize .ttf embedded bitmaps to be used with priority over glyphs when rendering.
The solution can be found in detail at http://www.electronicdissonance.com/2010/01/raster-fonts-in-visual-studio-2010.html
Solution Summary
Add the 'Traditional Chinese' code page to the OS/2 PANOSE table.
Use the 'ISO 10646-1' (Unicode, UCS-2) encoding.
Include glyphs for the following seemingly random Hiragana characters:
い - U+3044
う - U+3046
か - U+304B
し - U+3057
の - U+306E
ん - U+3093
This list is not a joke
On certain color-schemes this sub-pixel rendering that Windows does makes the font completely unreadable.
It sounds as if ClearType is not correctly calibrated.
"Pixel-perfect" display is possible only when the text color matches a color plane of the display. For black or grayscale text, that means a grayscale display (high-resolution and expensive digital monochrome displays are popular in the medical imaging field, for example).
Otherwise, you run into the fundamental fact that color components are physically separated on the display. The concept of ClearType is to adjust the image to compensate for the actual physical offset between color planes.
Printed media with high-precision registration is the closest you get to multiple color planes without any offset.
Now, it does still make sense to disable ClearType in some cases -- when the image is intended to be saved in a file rather than presented on the local display, disabling ClearType can produce results that are legible across a wider range of displays and also compress better. (But for best results, send vectors and let the end-user display compensate for its particular sub-pixel structure)
In GDI, the use of ClearType is controlled via the LOGFONT structure that tells text-drawing functions which font family, size, and attributes to use. In GDI+, use SetTextRenderingHint on a Graphics instance.
Because the use of ClearType is set by the application at the same time as size, weight, and other attributes, your font is subject to being requested both with and without it. However, ClearType is not compatible with all fonts; by forcing incompatibility, you can avoid ClearType for your font alone.
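For reference, the application-side selection looks roughly like this in GDI (a sketch; the face name is hypothetical):

#include <windows.h>

/* Ask GDI for a face with anti-aliasing explicitly disabled.
   The face name is hypothetical - substitute your font's name. */
HFONT create_unsmoothed_font(int pixel_height)
{
    LOGFONTW lf = {0};
    lf.lfHeight  = -pixel_height;            /* negative = character height */
    lf.lfQuality = NONANTIALIASED_QUALITY;   /* no ClearType, no grayscale  */
    lstrcpynW(lf.lfFaceName, L"MyPixelFont", LF_FACESIZE);
    return CreateFontIndirectW(&lf);
}

The catch is that this is each application's choice; a font shipped to arbitrary applications cannot rely on it, which is why the font-side levers below matter.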
The LOGFONT documentation has the following remarks about ClearType:
The following situations do not support ClearType antialiasing:
Text is rendered on a printer.
Display set for 256 colors or less.
Text is rendered to a terminal server client.
The font is not a TrueType font or an OpenType font with TrueType outlines. For example, the following do not support ClearType antialiasing: Type 1 fonts, Postscript OpenType fonts without TrueType outlines, bitmap fonts, vector fonts, and device fonts.
The font has tuned embedded bitmaps, for any font sizes that contain the embedded bitmaps. For example, this occurs commonly in East Asian fonts.
In addition, the gasp table within the TTF format has several fields specified to influence ClearType usage.
Documentation at https://www.microsoft.com/typography/otspec/gasp.htm
and https://fontforge.github.io/fontinfo.html#gasp
And of course, make sure that the "optimized for ClearType" bit in the head table is not set.
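Per the OpenType spec linked above, a version 1 gasp table is just an array of (max ppem, behavior) ranges, with two bits controlling ClearType. A sketch of the values involved, written out as C constants (an illustration of the layout, not a font-editing API; in the font file itself these are big-endian uint16 values):

#include <stdint.h>

/* gasp version 1 behavior flags, per the OpenType spec */
#define GASP_GRIDFIT             0x0001  /* grid-fit (hint) at this size */
#define GASP_DOGRAY              0x0002  /* grayscale anti-aliasing      */
#define GASP_SYMMETRIC_GRIDFIT   0x0004  /* ClearType-style grid-fit     */
#define GASP_SYMMETRIC_SMOOTHING 0x0008  /* ClearType smoothing          */

/* One range covering every size (max ppem 0xFFFF) that requests
   grid-fitting only: no grayscale AA, no ClearType smoothing. */
struct gasp_range { uint16_t max_ppem; uint16_t behavior; };

static const uint16_t gasp_version    = 1;
static const uint16_t gasp_num_ranges = 1;
static const struct gasp_range gasp_ranges[] = {
    { 0xFFFF, GASP_GRIDFIT },
};

Whether a given Windows rasterizer honors the request for your font is still worth testing, since applications can override rendering quality as described above.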

How to stop GhostScript replacing Times-Roman with NimbusRomNo9L-Regu (on Windows)

I am using Ghostscript v9.07 on Windows 7 (x64) to convert PostScript files into PDFs. Inside the PostScript files, the fonts Times-Roman and Times-Bold are used - this can be seen in the PostScript plain-text:
/Times-Roman findfont 24.00 scalefont setfont
.
.
.
/Times-Bold findfont 24.00 scalefont setfont
.
.
.
(This is extremely common in PostScript output from TeX.)
When I convert the PostScript to PDF using Ghostscript, Ghostscript uses the Nimbus-Roman font-family.
GPL Ghostscript 9.07 (2013-02-14)
Copyright (C) 2012 Artifex Software, Inc. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Loading NimbusRomNo9L-Regu font from %rom%Resource/Font/NimbusRomNo9L-Regu... 39
38016 2393624 2695216 1387174 4 done.
Loading NimbusRomNo9L-Medi font from %rom%Resource/Font/NimbusRomNo9L-Medi... 39
38016 2453738 2695216 1390537 4 done.
This output suggests that it is loading the font-family from resources embedded in the executable. In any case, it isn't loading it from the PostScript file so it is either from the application's distribution or the operating system.
The result looks abysmal in every PDF viewer I have tried, regardless of font-smoothing settings.
I cannot understand it. The resulting PDF's text is not rasterised - you can select it, cut and paste it, and the file size alone shows that it is still real text - so I can only conclude that this is a quirk of the Nimbus-Roman typeface.
There is absolutely no reason to use that typeface but, no matter what I do, I cannot seem to get Ghostscript to use a Times-Roman (or equivalent) typeface from the Windows font set.
I tried the -sFONTPATH switch (called sFONTDIR on some sites; I tried both) to no avail, and I went looking for the fontmap file to try hacking that, but didn't find one for Windows - only fontmaps for other operating systems in the install directory.
How can I change this behaviour and stop Ghostscript from using the awful font?
The way to have Ghostscript use a specific font, rather than a substitute, is to embed the font in the input.
This:
/Times-Roman findfont 24.00 scalefont setfont
merely requests a font (and size); it does not include the font data. In the absence of the font, Ghostscript is forced to use a substitute font.
On Windows, Ghostscript uses its ROM file system and does not ship with the support files on disk. If you want to use those, you can pull them from Git and then use the -I switch. That will allow you to alter cidfmap, which controls the CIDFonts known to Ghostscript, and Fontmap.GS, which controls the regular fonts.
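For example, once those resource files are on disk and the directory is passed with -I, hypothetical Fontmap.GS entries pointing the requested names at the Windows TrueType files would look like this (paths are assumptions; adjust to your system):

/Times-Roman (c:/windows/fonts/times.ttf) ;
/Times-Bold (c:/windows/fonts/timesbd.ttf) ;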
However, I don't see the result you do with the apparently bitmapped font. Perhaps you should try a newer version of Ghostscript. Or post an original PostScript program and converted PDF file so it would be possible to examine them.
[edit starts here]
OK, so basically there are some problems with what you've put up above. The screenshot you have there is from page 1, but the 'font substitution', i.e. the loading of NimbusRoman-Regular and NimbusRoman-Bold (on current versions of Ghostscript), takes place on page 5. Clearly the 'substitution' on page 5 can't affect what's on page 1. You would have seen that if you hadn't set -dBATCH; moral: when debugging a problem, keep the command line simple.
The actual 'problem' is that the PostScript program uses a Type 3 bitmap font for all the text, the font being defined in the body of the program.
In short, the text is a crappy bitmap font because that's the way the PostScript was created: it contains a crappy bitmap font and insists on using it (FWIW, it appears to have been rendered at 300 dpi).
In fact, the only text in the document which is scalable is the very text which uses Times-Roman (not a substitution; it uses the font from the interpreter instead of the bitmap font).
This is the text for the figure at the top of the page: 'point', 'p', 'X', 'line', 'Y', 'u', 'W'. It has been included as an EPS (originally called ray_space.eps) in the document, which is why it uses real scalable fonts. If you zoom in on the top of page 5 you will see that the scalable fonts continue to look smooth (the body of the figure) while the bitmapped font (the figure title, for example) looks bitmapped.
There's nothing you can do about that, essentially, other than recreate the PostScript from the original DVI document. This is not a bug (or at least, not a Ghostscript bug).
[and another edit...]
I'm afraid it's not really that simple.
There are 26 fonts named /Fa to /Fx. I am unsure why this is the case, since each font is capable of containing 255 glyphs yet most do not (at the same time, they don't just contain two glyphs either). There seems to be no good reason for the fonts being encoded as they are.
The program uses each font as it requires it, switching fonts when it needs different glyphs. E.g.:
1 0 bop
392 319 a Fx (An)21 b(In)n(tro)r(duction)g(to)h(Pro)t(jectiv)n
(e)h(Geometry)670 410 y(\(for)f(computer)f(vision\))
815
561 y Fw(Stan)c(Birc)o(h\014eld)74 754 y Fv(1)67 b(In)n(tro)r(duction)
74 856 y Fu(W)l(e)
The 'Fx' and 'Fw' calls load a font dictionary which is then used by the following code. Frankly, the TeX PostScript is difficult to figure out; one might almost think it was designed to be obfuscated.
Identifying which two-byte pairs beginning with 'F' require removing or replacing, and which do not (e.g. ASCII-encoded image data), would be tricky. In addition, the glyphs are rendered flipped in the y axis (to match the transformation matrix in use), so you would have to adjust the FontMatrix similarly for any substitute font.
If the fonts in these TeX font dictionaries always contained the same glyphs, it would be possible to define a set of substitute fonts (/Fa to /Fx) which could be used instead. These would need to be loaded into the TeXDict dictionary to replace the ones defined in the header of the program, of course.
I'm not saying it's impossible, but it would be a good amount of research effort to figure out what is possible, and then a good amount of coding to implement it.

How to convert italic font to normal font in pdf using some library?

Is there any way to convert italic or bold fonts in my PDF to a normal font, using some library like ImageMagick or Ghostscript?
Basically the answer is 'no', though there are several levels of caveat in there.
The most common scenario for a PDF file is that it contains an embedded font, and that font is subset. In this case the font will use a custom Encoding, so that when you see 'Hello' on your monitor, the actual character codes might be 'Axtte' or similar gibberish. If the font also contains a ToUnicode table you could, technically, create an embedded subset of the regular font from the same family as the bold or italic and embed that, and it would work. This would be an immense amount of work.
If the font isn't subset then it may not contain a custom Encoding, which would make that task easier, because you wouldn't have to re-encode the replacement.
If the font isn't embedded, then you need only change the font name in the Font object, because the PDF consumer will have to find a substitute anyway.
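For the non-embedded case, the change really is that small. A hypothetical Font object (the object number and encoding are illustrative):

7 0 obj
<< /Type /Font /Subtype /Type1
   /BaseFont /Times-Italic      % change this name to /Times-Roman
   /Encoding /WinAnsiEncoding
>>
endobj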
Note that, because PDF is a binary format, with an index (xref) containing the offset of every object in the file, any changes will mean that the xref table has to be reconstructed, again a considerable task.
I'm not aware of any tools which would do any of this for you automatically; you'd have to write your own, though some parts could be automated. MuPDF, for example, will 'fix' a PDF file which has an incorrect xref table.
And even after all that, the likelihood is that the spacing would be different for the italic or bold font compared to the regular font anyway, and would look peculiar if you replaced them with a regular font.
So, fundamentally, no.
In low-level PDF you can apply some rendering flags in front of a text stream, such as the "rendering mode" Tr operator. For instance, you can turn on rendering of the text outline and increase the outline stroke width with the command sequence 0.4 w 2 Tr, which will cause normal text to become more "bold" (there are other, better ways to accomplish this using the font descriptor dictionary). Conversely, one can employ this tactic to slim down bold text using a clipped, thicker outline, but this may not be ideal.
As for italic, most fonts contain a metric indicating their italic angle, and you can use it to add a faux italic via a shear transformation of the CTM with the cm operator. Once again, this works better for adding an italic shear than for removing one, but it may have some success either way.
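Sketched as a PDF content-stream fragment (coordinates and the /F1 font resource name are illustrative), the two tricks look like this:

q                          % faux bold: stroke the outlines as well
0.4 w 2 Tr                 % stroke width 0.4, mode 2 = fill + stroke
BT /F1 12 Tf 72 700 Td (weight added) Tj ET
Q

q                          % faux italic: shear the CTM before the text
1 0 0.2126 1 0 0 cm        % 0.2126 = tan(12 degrees)
BT /F1 12 Tf 72 650 Td (slant added) Tj ET
Q

Wrapping each fragment in q/Q restores the stroke width, rendering mode, and CTM afterwards, so the rest of the page is unaffected.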
See the PDF Reference.
This will require a library with lower-level PDF building capabilities, and you would have to do it manually, but it is technically possible.

Disable subpixel antialiasing of text in PDFs

I'm generating PDFs. One font in particular - Helvetica Neue Light - renders particularly poorly when the PDF is viewed on a Mac, thanks to subpixel antialiasing.
I want to disable subpixel antialiasing in the PDF. (Not in the PDF viewer.) The equivalent in CSS is -webkit-font-smoothing: antialiased.
Bad:
Good:
Is this possible?
No.
Somewhat longer answer? :)
Anti-aliasing is a device-specific optimisation: it would be done differently on a printing device (if at all), on a monitor, and on an LCD monitor, for example.
PDF is by design a device-agnostic file format, and as such it stays away (or tries to stay away) from device-specific constructs. Anti-aliasing settings are by definition preferences of the viewer and cannot be influenced by the PDF file itself.
The result you're showing is quite different indeed - are you sure the font is accurately embedded in the PDF file (and is of decent quality)? It would also be worthwhile to look a bit further at the "on a Mac" statement. I'm assuming you mean it looks bad in Preview on a Mac; in that case it'd be interesting to see how it performs in Preview versus Adobe Acrobat. That could tell you whether it's a font issue or an application issue.