How do you change the name of the font embedded in the .ttf file? (It's for a device that's expecting a hard coded font name that I'd like to swap w/ another more readable openly licensed font).
I'd prefer a method which I can implement myself rather than installing a program.
TrueType is a pretty complex binary data format -- the kind that takes an entire book-length spec to describe. I've worked with it in the distant past.
There are specialized tools that can edit fonts, including metadata like names. I would not recommend trying to mess with the binary data in a font file without such a tool. There might be libraries available that you could call to manipulate TrueType data; if one exists, I would guess Python would be the most likely language to find it in, because there's a long-standing association between font hackers and Python (Guido van Rossum's brother is a well-known typographer).
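One such library is fontTools; a minimal sketch of renaming via the 'name' table (the old and new names here are placeholders for whatever your device expects) might look like:

    # Minimal sketch using fontTools; "OldDeviceFont" / "MyReadableFont" are placeholders.
    from fontTools.ttLib import TTFont

    OLD_NAME = "OldDeviceFont"
    NEW_NAME = "MyReadableFont"

    font = TTFont("original.ttf")

    # Name IDs 1 (family), 3 (unique ID), 4 (full name), 6 (PostScript name) and
    # 16 (typographic family) are the records a device is most likely to match on;
    # here we simply rewrite every record that contains the old name.
    for record in font["name"].names:
        value = record.toUnicode()
        if OLD_NAME in value:
            record.string = value.replace(OLD_NAME, NEW_NAME)

    font.save("renamed.ttf")

Unlike a raw byte patch, this keeps the table offsets and checksums consistent, and the new name does not have to be the same length as the old one.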
This may be only useful in very specific situations, but should you need to change a font's name to something else that is the exact same length, you can do so in a hex editor (e.g. Okteta). Find all the instances of the name, and then edit them to be the new name. I found there were 2 copies of the name in each place - one that's normal, and another with 0x00 in between each letter.
The only evidence I have that this actually works is empirical with a sample size of 1.
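If you'd rather script that same-length replacement than do it by hand in a hex editor, a rough Python sketch of the same idea (the names are placeholders and must be exactly the same length; the copies with 0x00 between the letters are the UTF-16 encoded name records) would be:

    # Rough sketch of the same-length byte replacement described above.
    old, new = "OldName", "NewName"          # placeholders; must be equal length
    assert len(old) == len(new)

    with open("original.ttf", "rb") as f:
        data = f.read()

    # The plain (single-byte) copies of the name...
    data = data.replace(old.encode("ascii"), new.encode("ascii"))
    # ...and the copies with 0x00 between the letters (UTF-16BE name records).
    data = data.replace(old.encode("utf-16-be"), new.encode("utf-16-be"))

    with open("renamed.ttf", "wb") as f:
        f.write(data)

Note that the TrueType table checksums are left stale by this, which renderers seem to tolerate in practice; that is consistent with the sample size of 1 above.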
I want to generate a Portable Document Format (PDF) with an original program of mine.
I am going to experiment with an original typesetting program, and in the course of development I want to avoid external tools and fonts as much as possible.
So it would be ideal to avoid using XeTeX, LuaTeX, and other engines.
And I want to store the glyph information internally in my program or my library.
But where should the character code be specified in the PDF so that the viewer program knows what the characters are when they are copied or searched?
To generate glyphs, my naive approach is to save, in a local library, raster images or Bézier curve parameters that correspond to the characters.
According to the PDF Reference, that seems quite possible.
I do not care about kerning, ligatures, or other aesthetic virtues for my present purpose, or at least those can be dealt with later.
Initially, I thought I might generate PostScript and use Ghostscript to convert that to PDF.
But it is pointed out here that PostScript does not support Unicode, which I will certainly use.
My options are then reduced to generating PDF directly from scratch.
My confusion is that, though my brute-force approach may render correctly, I guess the resulting PDF would be one where the viewer is unable to copy or search text, since nowhere would I have specified the character codes.
In PDF Reference p.122, we see that there are several different objects.
The ones that seem relevant are text objects, path objects, and image objects.
Is it possible to associate an image object with its character code?
As I recall, there are some scanned PDFs, for example the freely previewed parts of scanned Google Books, in which you can copy strings correctly.
What is the method or field specifying that?
However, in the various tables that follow in the PDF Reference, I think there is no suitable slot for a Unicode code point.
Similarly, it is not clear how to associate a path object with its character code.
If this can be done, the envisioned project would be easiest, since I would just extract some open source fonts' Bézier curve parameters (I believe that can be done) and translate them myself into the PDF-allowed format.
If neither image objects nor path objects can hold character codes, I conclude that a text object is (obviously) more suitable for representing a glyph together with its character code.
Maybe a more correct way would be embedding a custom font, synthesized at runtime, in the PDF.
This is mentioned only briefly on p.364, sec. 5.8, "Embedded Font Programs".
That does seem rather difficult and requires tremendous research.
I would like you to recommend some tutorials for embedding fonts; they are not easy to find.
In fact, I find that example PDF files themselves are already scarce, as most of them seem to come as LZ-compressed binary files (I guess).
Indeed, I tried compiling a "Hello world" PDF in a non-Computer-Modern font and opening it with a text editor, and all I see is blanks, control characters, and mojibake-like strings.
In summary, how do I (if possible) represent a glyph by a text object, image object, or path object so that its character code can be known?
For concreteness, can you generate a PDF such that a circle is shown, but when you copy it, you copy the character "A"?
The association between the curves and the character code is the font. There are several tables involved that do the mappings. The font has an Encoding vector which is indexed by the character code and yields a glyph name. For copying out of the document, there must also be a ToUnicode CMap which maps character codes to Unicode code points.
If you study a simple example of a PostScript Type 3 font, that should be very beneficial in understanding a PDF font. I have a short one in this calendar program.
To answer the bold question, if you convert gridcal.ps to PDF, copying the moon glyph results in the character 1 because it is in the ASCII position for 1 in the Encoding vector. Some of the other glyphs, notably sun, mars and venus, are recognized by Ghostscript, which produces a mapping to the Unicode character. This is very clever, but probably not sufficiently extensive to rely upon (indeed, moon, mercury, jupiter and saturn are not recognized).
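To make the bold question concrete, below is a rough Python sketch that writes a small uncompressed PDF by hand: a Type 3 font whose only glyph, /circle, draws a circle with Bézier curves, an Encoding whose Differences array puts that glyph at character code 65, and a ToUnicode CMap that maps code 65 back to U+0041, so the page shows a circle but copying it yields "A". The object layout, coordinates and names are illustrative, not taken from any particular reference.

    # Sketch only: hand-written, uncompressed PDF with a Type 3 font whose glyph
    # draws a circle while Encoding + ToUnicode make it copy out as "A".

    # Type 3 glyph program: set the advance width (d0), then a circle
    # approximated by four Bezier curves in the 1000-unit glyph space
    # (scaled back down by FontMatrix 0.001).
    circle_glyph = (
        "600 0 d0\n"
        "550 300 m\n"
        "550 438 438 550 300 550 c\n"
        "162 550 50 438 50 300 c\n"
        "50 162 162 50 300 50 c\n"
        "438 50 550 162 550 300 c\n"
        "f\n"
    )

    # ToUnicode CMap: character code 0x41 maps to U+0041 ("A").
    to_unicode = (
        "/CIDInit /ProcSet findresource begin\n"
        "12 dict begin\nbegincmap\n"
        "/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def\n"
        "/CMapName /CircleToA def\n/CMapType 2 def\n"
        "1 begincodespacerange\n<41> <41>\nendcodespacerange\n"
        "1 beginbfchar\n<41> <0041>\nendbfchar\n"
        "endcmap\nCMapName currentdict /CMap defineresource pop\nend\nend\n"
    )

    # Page content: show character code 65 ("A") in the Type 3 font F1.
    page_content = "BT /F1 36 Tf 100 700 Td (A) Tj ET\n"

    def stream(data):
        return f"<< /Length {len(data)} >>\nstream\n{data}\nendstream"

    objects = {
        1: "<< /Type /Catalog /Pages 2 0 R >>",
        2: "<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        3: ("<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
            "/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>"),
        4: stream(page_content),
        5: ("<< /Type /Font /Subtype /Type3 /FontMatrix [0.001 0 0 0.001 0 0] "
            "/FontBBox [0 0 600 600] /FirstChar 65 /LastChar 65 /Widths [600] "
            "/Encoding << /Type /Encoding /Differences [65 /circle] >> "
            "/CharProcs << /circle 6 0 R >> /ToUnicode 7 0 R >>"),
        6: stream(circle_glyph),
        7: stream(to_unicode),
    }

    # Serialize the objects, then the cross-reference table and trailer.
    out = "%PDF-1.4\n"
    offsets = {}
    for num in sorted(objects):
        offsets[num] = len(out)
        out += f"{num} 0 obj\n{objects[num]}\nendobj\n"

    xref_pos = len(out)
    out += f"xref\n0 {len(objects) + 1}\n0000000000 65535 f \n"
    for num in sorted(objects):
        out += f"{offsets[num]:010} 00000 n \n"
    out += (f"trailer\n<< /Size {len(objects) + 1} /Root 1 0 R >>\n"
            f"startxref\n{xref_pos}\n%%EOF\n")

    with open("circle_copies_as_A.pdf", "wb") as f:
        f.write(out.encode("ascii"))

The three pieces doing the work are the /Differences array (code 65 selects the glyph named /circle), the /CharProcs stream holding the glyph's curves, and the /ToUnicode stream that tells a text extractor that code 65 means U+0041. A Type 3 font like this is one way to get copy and search working without embedding a full TrueType or Type 1 font program (section 5.8).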
I tried to copy text from a PDF file but get some weird characters. Strangely, Okular can recognize the text, but Sumatra PDF and Adobe cannot; all three applications are installed on Windows 10 64-bit. To better explain my issue, here is the video https://streamable.com/sw1hc. The "text layer workaround file" is one solution I got. Any help is greatly appreciated. Regards
In short: The (original) PDF does not contain the information required for regular text extraction as described in the PDF specification. Depending on the exact nature of your task, you might try to add the required information to the existing text objects and fonts or you might go for OCR.
Mapping character codes to Unicode as described in the PDF specification
The PDF specification ISO 32000-1 (and similarly ISO 32000-2, too) describes an algorithm for mapping character codes to Unicode values using information available directly inside the PDF.
It has been quoted very often in other stack overflow answers (see here, here, here, here, here, or here), so I won't quote it here again.
Essentially this is the algorithm used by Adobe Acrobat during copy&paste and also by many other text extractors.
In PDFs which don't contain the information required for text extraction, you eventually get to this point in the algorithm:
If these methods fail to produce a Unicode value, there is no way to determine what the character code represents in which case a conforming reader may choose a character code of their choosing.
What happens if the algorithm above fails to produce a Unicode value
This is where the text extraction implementations differ: they try to determine the matching Unicode value by using heuristics, by drawing on information from beyond the PDF, or by applying OCR to the glyph in question.
That the different programs you tried returned such different results shows that
your PDF does not contain the information required for the algorithm above from the PDF specification, and
the heuristics used by those programs differ substantially, and Okular's heuristics work best for your document.
What to do in such a case
There are multiple options, more or less feasible depending on your concrete case:
Ask the source of the PDF for a version that contains proper information for text extraction.
Unless you have a contract with that source that requires them to supply the PDFs in a machine readable form or the source is otherwise obligated to do so, they usually will decline, though...
Apply OCR to the PDF in question.
Depending on the quality of the OCR software and the glyphs in the PDF, the results can be of a questionable quality; e.g. in your "PDF copy text issue-Text layer workaround.pdf" the header "Chapter 1: Derivative Securities" has been recognized as "Chapter1: Deratve Securites"...
You can try to interactively add manually created ToUnicode maps to the PDF, e.g. as described by Tilman Hausherr in his answer to "how to add unicode in truetype0font on pdfbox 2.0.0".
Depending on the number of different fonts you have to create the mappings for, this approach might easily require way too much time and effort...
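If you work in Python rather than PDFBox, pikepdf can attach a hand-written ToUnicode CMap to an existing font; a minimal sketch (the font resource name /F1 and the code-to-Unicode pairs are assumptions you would have to work out per font) might be:

    # Sketch: attach a hand-written ToUnicode CMap to one font on one page with
    # pikepdf. Assumes a single-byte font where code 0x01 should read as "A" and
    # 0x02 as "B"; a real document needs a mapping worked out per font.
    import pikepdf

    cmap = (b"/CIDInit /ProcSet findresource begin\n"
            b"12 dict begin\nbegincmap\n"
            b"/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def\n"
            b"/CMapName /AdHoc def\n/CMapType 2 def\n"
            b"1 begincodespacerange\n<00> <FF>\nendcodespacerange\n"
            b"2 beginbfchar\n<01> <0041>\n<02> <0042>\nendbfchar\n"
            b"endcmap\nCMapName currentdict /CMap defineresource pop\nend\nend\n")

    with pikepdf.open("input.pdf") as pdf:
        font = pdf.pages[0].Resources.Font["/F1"]   # assumed resource name
        font.ToUnicode = pdf.make_stream(cmap)
        pdf.save("with_tounicode.pdf")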
Say I'm distributing a file that I want to be secret, and I assign each person that I give the file a unique id.
How can I embed this id in the file so that I can determine who leaks my file?
Some file formats have a section in which I can put information that won't render the file corrupt. But this is easily detectable by looking at the specific section, or by changing the information.
I would guess that any solution is identifiable by byte comparison, but I was wondering if there exist solutions that embed the id in a part that, if changed, renders the file corrupt. (I would guess this would be file format specific, but this question is to learn about techniques, so I'd gladly read about specific cases.)
Thanks!
For image files and Unicode text you may use Steganography.
For audio files there are special watermarking algorithms that add noise not heard by humans.
You may use metadata to add watermarks, but they can be easily removed by end user.
See what is currently possible in this SO question: Good library for Digital watermarking
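As a toy illustration of the image case, a least-significant-bit scheme with Pillow could look like the sketch below. The file names and id format are made up, and this naive approach does not survive recompression, resizing or deliberate LSB scrubbing, so treat it as a starting point rather than a robust watermark.

    # Minimal LSB steganography sketch with Pillow: hide a short recipient id in
    # the least significant bit of the red channel of a lossless image.
    from PIL import Image

    def embed_id(src, dst, recipient_id):
        bits = "".join(f"{b:08b}" for b in recipient_id.encode("utf-8"))
        img = Image.open(src).convert("RGB")
        pixels = list(img.getdata())
        if len(bits) > len(pixels):
            raise ValueError("image too small for the payload")
        stamped = []
        for i, (r, g, b) in enumerate(pixels):
            if i < len(bits):
                r = (r & ~1) | int(bits[i])   # overwrite the red LSB
            stamped.append((r, g, b))
        img.putdata(stamped)
        img.save(dst, "PNG")                  # must stay lossless

    def extract_id(src, length):
        img = Image.open(src).convert("RGB")
        bits = [str(r & 1) for r, g, b in list(img.getdata())[:length * 8]]
        return bytes(int("".join(bits[i:i + 8]), 2)
                     for i in range(0, len(bits), 8)).decode("utf-8")

    embed_id("original.png", "copy_for_alice.png", "alice-0001")
    print(extract_id("copy_for_alice.png", len("alice-0001")))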
How do I include and use new fonts in a wxWidgets project?
I am using VS2005.
I just want to print text using a new TTF font.
Thanks in advance!!
Unless you're willing to link against something like FreeType:
http://en.wikipedia.org/wiki/FreeType
...most any program is going to require the font to be installed in the operating system, by the user or by some OS-specific script. You can't just load it by filename off the cuff in your app.
Because of the platform dependence of naming and accessing custom fonts, the path of least resistance is not to try and hardcode a font...but to let the user pick one out of a dialog. You would use a wxFontDialog for this:
http://docs.wxwidgets.org/stable/wx_wxfontdialog.html
It will let you retrieve the wxFontData, from which you can get the chosen wxFont:
http://docs.wxwidgets.org/stable/wx_wxfontdata.html#wxfontdatagetchosenfont
Once you have that, you can save and reload an identity of the font via the native string interface:
http://docs.wxwidgets.org/stable/wx_wxfont.html#wxfontgetnativefontinfodesc
Trying to formulate these strings on your own or work with the "face name" is a little dodgier:
http://docs.wxwidgets.org/stable/wx_wxfont.html#wxfontsetfacename
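A rough sketch of that dialog, font data and native description flow, shown in wxPython for brevity (the C++ calls are the ones linked above, and the window layout here is only illustrative):

    # Sketch of the wxFontDialog / wxFontData / GetNativeFontInfoDesc flow,
    # in wxPython; the equivalent C++ calls are the ones documented above.
    import wx

    class DemoFrame(wx.Frame):
        def __init__(self):
            super().__init__(None, title="Font demo")
            self.text = wx.StaticText(self, label="Hello, fonts")
            button = wx.Button(self, label="Pick font", pos=(0, 40))
            button.Bind(wx.EVT_BUTTON, self.on_pick)

        def on_pick(self, _event):
            dialog = wx.FontDialog(self, wx.FontData())
            if dialog.ShowModal() == wx.ID_OK:
                font = dialog.GetFontData().GetChosenFont()
                self.text.SetFont(font)
                # This string can be written to your config file and fed back
                # into a wxFont later (wxFont::SetNativeFontInfo accepts it).
                print(font.GetNativeFontInfoDesc())
            dialog.Destroy()

    app = wx.App()
    DemoFrame().Show()
    app.MainLoop()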
Generally speaking a lot of the same problems arise here as dealing with fonts in HTML. If you have a very specific idea about the cross-platform appearance of some text, your best bet is often to make an image out of that text and use that instead of going through the hoops to get the font you want in the app. If you're more flexible and have a lot of text the user is interested in, then they may be interested in changing the font too. So just use a default but offer the user a choice to pick anything they want which is installed on their system.
(Note: I personally consider the handling of fonts in pretty much every OS or document system to be a disgrace. Imagine a world where, in order to get a graphic to display in your program, you had to register it with the operating system through a complex process, and it would not copy from machine to machine when you copied a document in which it was embedded. We're dealing now with graphics that are orders of magnitude larger than font files, and yet they are handled seamlessly, while people seem to accept the lack of seamless font transfer as "normal". Archaic DRM mindsets of font vendors are one side of the problem, but lame technology is another big component.)
Like the title says. Reason I ask is that we're converting PDFs to formatted ASCII text (using pdftotext) and only want to display the ones that look reasonably sane.
PPT files tend to have text over images, diagonal text and others things that don't translate to ASCII very well, so we'd like to filter them out if we can.
The creating application of a PDF is listed in its XMP metadata. You can see this quite easily in Acrobat 9 (and I believe earlier): go to File > Properties, click Additional Metadata..., then go to Advanced and it's listed under both XMP Core Properties and PDF Properties:
xmp:CreatorTool: Microsoft PowerPoint
pdf:Creator: Microsoft PowerPoint
I'm guessing you want to find this programmatically, so you'll need to find a library that reads this metadata and works with your language. Here is a list of some XMP tools.
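For instance, in Python the pikepdf library can read both the classic Info dictionary and the XMP packet; a small sketch (the "PowerPoint" substring test is just an assumption, not a guaranteed detector) might be:

    # Sketch: flag PDFs whose Info dictionary or XMP metadata mentions PowerPoint.
    import pikepdf

    def looks_like_powerpoint(path):
        with pikepdf.open(path) as pdf:
            values = [str(pdf.docinfo.get("/Creator", "")),
                      str(pdf.docinfo.get("/Producer", ""))]
            meta = pdf.open_metadata()
            values.append(meta.get("xmp:CreatorTool", ""))
        return any("PowerPoint" in v for v in values)

    print(looks_like_powerpoint("slides.pdf"))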
Short answer:
No, I don't think so.
Long answer:
No, I don't think so, because there are many ways to convert a PowerPoint file to PDF, for example Adobe Acrobat and PDFCreator and many many others. It's up to the converters to embed specific information in the PDF file; even if you find a way to detect PowerPoint-sourced PDFs from one converter, the same method may not work for another.
Even longer answer:
No, I don't think so, because of the reasons described in the "long answer". And I don't think detecting the source of the PDF is the best approach to the problem you are trying to solve. PowerPoint is not the only producer of overlapped text and images. I think it's much better to detect the actual layout of the PDF file. If there is overlap of image and text, then do some filtering or pre-processing to cater for that.
Your reasoning is very arbitrary - there are surely plenty of PPT files without the features you describe, and plenty of PDF files with them, that were generated from another source.
In theory a better method would just be to detect when these "unwanted" situations occur. However, even though the PDF format is partly open (only for reading, apparently, so it's not truly an open format), extracting complex data like that would be incredibly difficult.
All PDFs can have this problem regardless of their source. Most desktop publishing suites are capable of outputting PDF and are often sold boasting their high quality and flashier PDF presentations ...
A "saner" method would be to use a PDF parser, ITextSharp, or pdfNet...etc, Using the library of your choice, find all image rectangles, and all text rectangles, SORT THE RECTANGLES, and then see if there is substantial overlap of text and image rects -- ignoring image to image overlaps. If so, reject the page and/or document.
That won't be perfect, but at least it's going to catch many PDFs that aren't sane, regardless of source. Other heuristics to add would include color analysis. (i.e. are the colors in the overlapping region sufficiently different to allow "sane" results?)
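A sketch of that heuristic in Python, using PyMuPDF rather than iTextSharp or PDFNet (the 20% overlap threshold and the block-type handling are assumptions you would tune):

    # Sketch of the overlap heuristic with PyMuPDF (fitz): compare text block
    # rectangles against image block rectangles and flag pages where a text
    # block is substantially covered by an image.
    import fitz  # PyMuPDF

    def overlap_area(a, b):
        # a, b are (x0, y0, x1, y1); plain axis-aligned intersection area
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return w * h if w > 0 and h > 0 else 0.0

    def page_is_suspect(page, threshold=0.2):
        blocks = page.get_text("blocks")   # (x0, y0, x1, y1, text, no, type)
        text_rects = [b[:4] for b in blocks if b[6] == 0]    # type 0 = text
        image_rects = [b[:4] for b in blocks if b[6] == 1]   # type 1 = image
        for t in text_rects:
            t_area = (t[2] - t[0]) * (t[3] - t[1])
            if t_area > 0 and any(overlap_area(t, i) / t_area > threshold
                                  for i in image_rects):
                return True
        return False

    doc = fitz.open("candidate.pdf")
    print(any(page_is_suspect(page) for page in doc))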
Best of luck to you
It might put its name in the creator or producer info, but I don't have a copy to check this theory with.
In general, it is not an easy task to programmatically determine (reliably) where a file came from or how it was generated based on its contents. After all, a file is just a collection of bits.
Unless you have a lot of resources to expend building the heuristics to determine whether a file looks "reasonably sane" according to your needs, I would consider this a task for human beings.
Some converters from PPT to PDF preserve the creator in comments at the beginning of the PDF.
I think that PDFs generated from most applications seem to be the same. They may have some metadata that you can read from the file...