I have a problem when adding characters such as "Č" or "Ć" while generating a PDF. I'm mostly using paragraphs for inserting some static text into my PDF report. Here is some sample code I used:
var document = new Document();
PdfWriter.GetInstance(document, new FileStream("test.pdf", FileMode.Create)); // example output path
document.Open();
Paragraph p1 = new Paragraph("Testing of letters Č,Ć,Š,Ž,Đ", new Font(Font.FontFamily.HELVETICA, 10));
document.Add(p1);
document.Close();
The output I get when the PDF file is generated looks like this: "Testing of letters ,,Š,Ž,Đ"
For some reason iTextSharp doesn't seem to recognize letters such as "Č" and "Ć".
THE PROBLEM:
First of all, you don't seem to be talking about Cyrillic characters, but about Central and Eastern European languages that use Latin script. Take a look at the difference between code page 1250 and code page 1251 to understand what I mean. [NOTE: I have updated the question so that it talks about Czech characters instead of Cyrillic.]
Second observation. You are writing code that contains special characters:
"Testing of letters Č,Ć,Š,Ž,Đ"
That is bad practice. Code files are stored as plain text and can be saved using different encodings. An accidental encoding switch (for instance, by uploading the file to a version control system that uses a different encoding) can seriously damage the content of your file.
You should write code that doesn't contain special characters, using a different notation instead. For instance:
"Testing of letters \u010c,\u0106,\u0160,\u017d,\u0110"
This will also make sure that the content doesn't get altered when compiling the code using a compiler that expects a different encoding.
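A quick Java check that the escape denotes the intended character Č (the same \u escapes work in C# source):
String escaped = "\u010c";
System.out.println((int) escaped.charAt(0)); // prints 268, i.e. 0x10C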
Your third mistake is that you assume that Helvetica is a font that knows how to draw these glyphs. That is a false assumption. You should use a font file such as Arial.ttf (or pick any other font that knows how to draw those glyphs).
Your fourth mistake is that you do not embed the font. Suppose you use a font on your local machine that is able to draw the special glyphs: then you will be able to read the text on your machine, but somebody who receives your file and doesn't have that font installed may not be able to read the document correctly.
Your fifth mistake is that you didn't define an encoding when using the font (this is related to your second mistake, but it's different).
THE SOLUTION:
I have written a small example called CzechExample that results in the following PDF: czech.pdf
I have added the same text twice, but using a different encoding:
public static final String FONT = "resources/fonts/FreeSans.ttf";

public void createPdf(String dest) throws IOException, DocumentException {
    Document document = new Document();
    PdfWriter.getInstance(document, new FileOutputStream(dest));
    document.open();
    // Simple font: code page 1250, embedded
    Font f1 = FontFactory.getFont(FONT, "Cp1250", true);
    Paragraph p1 = new Paragraph("Testing of letters \u010c,\u0106,\u0160,\u017d,\u0110", f1);
    document.add(p1);
    // Composite font: Unicode (Identity-H), embedded
    Font f2 = FontFactory.getFont(FONT, BaseFont.IDENTITY_H, true);
    Paragraph p2 = new Paragraph("Testing of letters \u010c,\u0106,\u0160,\u017d,\u0110", f2);
    document.add(p2);
    document.close();
}
To avoid your third mistake, I used the font FreeSans.ttf instead of Helvetica. You can choose any other font as long as it supports the characters you want to use. To avoid your fourth mistake, I have set the embedded parameter to true.
As for your fifth mistake, I introduced two different approaches.
In the first case, I told iText to use code page 1250.
Font f1 = FontFactory.getFont(FONT, "Cp1250", true);
This will embed the font as a simple font into the PDF, meaning that each character in your String will be represented using a single byte. The advantage of this approach is simplicity; the disadvantage is that you shouldn't start mixing code pages. For instance: this won't work for Cyrillic glyphs.
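You can see this single-byte mapping outside of iText as well; a minimal Java check ("Cp1250" is a standard JVM charset alias):
import java.nio.charset.Charset;

// Each of the five special characters maps to exactly one byte in Cp1250
byte[] bytes = "\u010c\u0106\u0160\u017d\u0110".getBytes(Charset.forName("Cp1250"));
System.out.println(bytes.length);             // prints 5: one byte per character
System.out.printf("%02X%n", bytes[0] & 0xFF); // prints C8, the Cp1250 code for Č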
In the second case, I told iText to use Unicode for horizontal writing:
Font f2 = FontFactory.getFont(FONT, BaseFont.IDENTITY_H, true);
This will embed the font as a composite font into the PDF, meaning that each character in your String will be represented using more than one byte. The advantage of this approach is that it is the recommended approach in the newer PDF standards (e.g. PDF/A, PDF/UA), and that you can mix Cyrillic with Latin, Chinese with Japanese, etc... The disadvantage is that you create more bytes, but that effect is limited by the fact that content streams are compressed anyway.
When I decompress the content stream for the text in the sample PDF, I see the following PDF syntax:
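The exact bytes depend on the embedded font subset, but the syntax has roughly this shape (the octal escapes are the Cp1250 byte values of Č, Ć, Š, Ž, Đ; the glyph IDs in the hex string are illustrative placeholders, since real IDs depend on the subset):
BT
36 806 Td
/F1 12 Tf
(Testing of letters \310,\306,\212,\216,\320)Tj   % simple font: one byte per character
0 -22 Td
/F2 12 Tf
<0174015201530154...>Tj                           % composite font: two bytes per glyph
ET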
As I explained, single bytes are used to store the text of the first line. Double bytes are used to store the text of the second line.
You may be surprised that the characters look OK on the outside (when viewing the text in Adobe Reader) but don't correspond with the bytes you see on the inside (in the decompressed content stream), but that's how it works.
CONCLUSION:
Many people think that creating PDF is trivial, and that tools for creating PDF should be a commodity. In reality, it's not always that simple ;-)
If you are using a FontProvider, I managed to solve the display of the special characters by setting the registerShippedFreeFonts parameter to true:
// (register standard PDF fonts, register shipped free fonts, register system fonts)
FontProvider dfp = new DefaultFontProvider(true, true, false);
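For completeness, a minimal sketch of how that provider is plugged into the HTML-to-PDF conversion (this assumes pdfHTML 2.x/3.x, where DefaultFontProvider lives in com.itextpdf.html2pdf.resolver.font; the HTML string and output path are placeholders):
import java.io.FileOutputStream;
import java.io.IOException;

import com.itextpdf.html2pdf.ConverterProperties;
import com.itextpdf.html2pdf.HtmlConverter;
import com.itextpdf.html2pdf.resolver.font.DefaultFontProvider;

public class HtmlToPdfWithFreeFonts {
    public static void main(String[] args) throws IOException {
        ConverterProperties properties = new ConverterProperties();
        // Register the free fonts shipped with pdfHTML, which cover
        // the Latin Extended-A characters discussed above
        properties.setFontProvider(new DefaultFontProvider(true, true, false));
        try (FileOutputStream out = new FileOutputStream("czech.pdf")) {
            HtmlConverter.convertToPdf(
                    "<p>Testing of letters \u010c,\u0106,\u0160,\u017d,\u0110</p>",
                    out, properties);
        }
    }
}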
See also: https://itextpdf.com/en/resources/books/itext-7-converting-html-pdf-pdfhtml/chapter-6-using-fonts-pdfhtml
I ran into some problems when trying to make a PDF in French for my project. It doesn't show special characters like é, ò, ê; they come out as their HTML entity codes, for instance &ecirc; or &oacute;.
So, thanks to this link, I tried to include my own font, but it gives this kind of message:
PDF error: This font cannot be embedded in the PDF document. If you would like to
use it anyway, you must pass Zend_Pdf_Font::EMBED_SUPPRESS_EMBED_EXCEPTION in the
$options parameter of the font constructor
Do you have any idea to solve it? Thanks.
You can use the default font. Just use the UTF-8 encoding every time you draw text:
$pdf = new Zend_Pdf();
$page = new Zend_Pdf_Page(Zend_Pdf_Page::SIZE_A4);
// The fourth argument tells drawText how the string is encoded
$page->drawText("Bonjour Hélène!", 705, 550, 'UTF-8');
I recently discovered an issue with IE10. We have a web page that displays English text beside its Japanese translation, and some of the Japanese characters display as squares. In the view-source page all characters are rendered correctly, and the database also has them stored correctly. The unusual part is that when I highlight the characters with the cursor, they convert to the correct characters.
I believe IE10 has a bug.
Is anyone having a similar issue, or does anyone know of a fix? I have checked all language settings, regional settings, browser font settings and run many other tests. Nothing corrects this issue.
This issue was related to a combining character sequence, which some fonts and Windows applications support and some do not.
Some fonts represent a single character as a sequence of two code points (a base character plus a combining mark); some fonts support this combination and some do not.
In this case the characters at issue were the following:
ジ
versus
シ and ゙
The latter two, I think, are a base character and a combining mark that together are intended to represent ジ.
The Unicode Standard tables on the Unicode web site define them like so:
Decimal   Character   Hex    Name
12472     ジ          30B8   KATAKANA LETTER ZI
12471     シ          30B7   KATAKANA LETTER SI
12441     っ゙          3099   COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK (shown here combined with small tu っ)
So some fonts use 12471 + 12441 to make 12472, and that is what I found: the actual string contains 12471 + 12441 (hex 0x30B7, 0x3099), not the precomposed 12472 (hex 0x30B8).
Any time the font being used does not support this combination, a box is displayed. The challenge is that it may be as simple as someone creating a birthday card using a font that doesn't support the combination, which causes a PC to fail to display the character correctly.
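If you control the text being displayed, one possible workaround (a sketch in Java; it fixes the string, not the browser) is to normalize to NFC, so the precomposed character is used wherever one exists. Japanese fonts typically include the precomposed glyph even when they don't handle the combining pair:
import java.text.Normalizer;

public class ComposeKatakana {
    public static void main(String[] args) {
        // Decomposed: KATAKANA LETTER SI (U+30B7) + COMBINING VOICED SOUND MARK (U+3099)
        String decomposed = "\u30B7\u3099";
        // NFC composes the pair into the precomposed KATAKANA LETTER ZI (U+30B8)
        String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(composed.length());                        // prints 1
        System.out.println(Integer.toHexString(composed.charAt(0)));  // prints 30b8
    }
}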
When I try to parse a PDF file with PDFBox in Java that was generated with cups-pdf, I get junk characters, but it works perfectly with common PDFs. I checked the fonts: the cups-pdf file shows FreeMono_00.ttf (I didn't see such a font anywhere) and the working PDF shows ArialMT.
Is there anything I should do differently when parsing PDFs generated using cups-pdf?
Below is the code I'm using for parsing:
PDFParser parser = new PDFParser(new FileInputStream(file));
parser.parse();
COSDocument cosDoc = parser.getDocument();
PDFTextStripper pdfStripper = new PDFTextStripper();
PDDocument pdDoc = new PDDocument(cosDoc);
String parsedText = pdfStripper.getText(pdDoc);
The output looks like this:
)LOH1DPHDVGW[W
6XEMHFWVXEMHFWVVDPSOH
0HVVDJHVHQGLQJGHWDLOVDORQJZLWKSULQWILOH
8VHU1DPH$EGXOUD]DN30
8VHU,'D#DFRP
Copying and pasting from the PDF gives the same result.
I'm only repeating what I've read, and I'm inexperienced here; if there were more mavens answering PDF/PDFBox questions, I'd wait for one of them to answer.
I believe the font either doesn't contain Unicode mapping tables (a ToUnicode CMap) at all, or has been embedded in the document without them. If the text looks like a simple substitution cipher for a single given document, that would tend to confirm this.
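In fact, the sample output above is consistent with exactly that: every visible character appears to be the real character shifted down by a constant 0x1D, which suggests the stored codes come from a subset font rather than a real text encoding. A quick check in Java (the 0x1D offset is my observation from this sample, not a cups-pdf constant):
public class ShiftCheck {
    public static void main(String[] args) {
        String junk = ")LOH1DPHDVGW[W";  // first line of the junk output above
        StringBuilder sb = new StringBuilder();
        for (char c : junk.toCharArray()) {
            sb.append((char) (c + 0x1D));  // undo the constant offset
        }
        // Prints "FileNameasdtxt"; spaces and punctuation presumably fell on
        // unprintable codes and were dropped
        System.out.println(sb);
    }
}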
If the font is embedded, I think sometimes only a subset of the glyphs you are actually using is embedded. That's likely here, since the font is not installed on the system (you said), and the original FreeMono font is large: over 4000 glyphs. In that case, I fear the correspondence between character code and glyph may be document-dependent, but I'm speculating.
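If you want to test the missing-tables theory on your file, you can list each page's fonts and whether they carry a ToUnicode CMap. A sketch using the PDFBox 2.x API (which differs from the 1.x calls in the question):
import java.io.File;

import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.font.PDFont;

public class ToUnicodeCheck {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File(args[0]))) {
            for (PDPage page : doc.getPages()) {
                PDResources resources = page.getResources();
                for (COSName name : resources.getFontNames()) {
                    PDFont font = resources.getFont(name);
                    // A font without a ToUnicode entry gives text extractors no
                    // reliable way to map its character codes back to Unicode
                    boolean hasToUnicode =
                            font.getCOSObject().containsKey(COSName.TO_UNICODE);
                    System.out.println(font.getName() + " -> ToUnicode: " + hasToUnicode);
                }
            }
        }
    }
}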
I'm using the TCPDF library to generate PDFs server-side in a daily cronjob. The library takes UTF-8 strings from the DB and writes them into a PDF using the Arial Unicode MS font (also embedding it in the PDF).
To be able to use this font, I had to convert it to a PHP-friendly format following these instructions: http://www.tcpdf.org/fonts.php
However, while most of the languages render correctly (glyphs are correct in Hebrew, Chinese, Japanese, Portuguese, etc.), Korean glyphs appear as square boxes in the PDF.
I noticed many (hundreds of) errors while running the ttf2ufm binary described in the link above:
Previous entry type: M
Warning: **** closepath on empty path in glyph "_d_8235" ****
I suspect the problem has to do with these errors: those couple of hundred glyphs can't be converted correctly, resulting in an invalid font file.
Am I doing something wrong? Or is this just a limitation of this library?
The latest TCPDF version automatically converts fonts into the TCPDF format using the addTTFfont() method. The old font conversion programs and scripts were removed.
For example:
// convert TTF font to TCPDF format and store it on the fonts folder
$fontname = $pdf->addTTFfont('/path-to-font/FreeSerifItalic.ttf', 'TrueTypeUnicode', '', 96);
// use the font
$pdf->SetFont($fontname, '', 14, '', false);