wkhtmltopdf not printing ℃ in pdf

I'm converting HTML to PDF using wkhtmltopdf. It is not printing ℃ (&#x2103;) in the PDF; instead it prints a garbled character. The HTML file displays ℃ fine, but the character is missing from the PDF. I already have http-equiv="Content-Type" content="text/html; charset=UTF-8" in the HTML file. Please suggest any ideas.
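For reference, written out in full, the tag in the <head> is the standard one:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">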

Related

NReco PhantomJS with Arabic characters

While generating a report in C# using the wonderful tool NReco.PdfGenerator (with PhantomJS as the engine), I bumped into an issue with Arabic characters.
The HTML layout renders normally in a browser, but when generating a PDF I get http://image.prntscr.com/image/73feca61ced346a094e2c652da4fea59.png
The HTML has
Any ideas are welcome.
First of all, ensure that you have the following meta tag in your HTML <head> section:
<meta http-equiv="content-type" content="text/html; charset=utf-8" />

Link to a html file within a repository from trac wiki

I am attempting to link from my wiki to web documentation within the source repository. I believe this functionality is provided by "export" links, e.g.
[export:path/to/file/index.html]
(after setting render_unsafe_content = yes, as in the trac.ini snippet below)
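That is, in trac.ini:
[browser]
render_unsafe_content = yes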
However, when I do this I get the following browser error:
This page contains the following errors:
error on line 17 at column 10: Opening and ending tag mismatch: link line 0 and head
Below is a rendering of the page up to the first error.
(followed by a small snippet of the html page I'm trying to display).
I'm using Trac v1.1.
(see also this related question: How to link to html file in Trac)
UPDATE:
It seems export links do work with other HTML files; specifically, there's no problem with an XHTML document starting:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
but I am seeing issues with a (valid) HTML5 document starting:
<!DOCTYPE html>
<html lang="en">
<head>
which I think Trac is attempting to parse as XML, and failing.
To create a hyperlink to a file within your repository, you can simply use the following wiki syntax:
[browser:<path> <label>]
where <path> is the path to the file within your repository and <label> is the text displayed for the link.
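For example (the path and label here are illustrative):
[browser:trunk/docs/index.html Project documentation]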

wkhtmltopdf footer-html encoding utf-8

wkhtmltopdf --encoding utf-8 is not working for --footer-html.
I am using the following command, and both HTML files are saved in UTF-8:
wkhtmltopdf --dpi 120 -O Portrait --encoding 'utf-8' --footer-html /tmp/testFooter.html /tmp/testMain.html /tmp/testPDF.pdf
Both files contain French characters, but in the PDF the footer has bad characters.
<html>
<head>
<title></title>
</head>
<body>
<div style="width:95%;font-size:9pt;font-family:Arial;">
<div style="border-top: 1px solid black;width: 100%;text-align: center;">
test - Guérin 691BANNE - FRANCE - SA au capital 0 Euros -737 729 - Téléphone : 86 03</div></div>
</body>
</html>
(Screenshot of the PDF output showing the garbled footer characters.)
Try adding the following line in the HTML head element of the footer:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Also make sure the fonts you need are installed on that machine. I needed Hindi (an Indian language) in my PDF files, and adding the meta tag did not solve my problem, so I installed Hindi fonts on my Debian box with:
sudo apt-get install fonts-indic
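To confirm that a matching font is now visible to the renderer, you can list the installed fonts with fontconfig; the grep pattern here is just an example:
fc-list | grep -i devanagari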

Exporting PDF > HTML with Acrobat Pro, all special chars showing ? despite apparent UTF-8 encoding.

I have a set of PDFs that I am exporting to HTML files using the HTML 4.01 export option. When I open the files in SublimeText or Chrome, all special characters are showing as a ?. The declared encoding is UTF-8:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
<!-- Created from PDF via Acrobat SaveAsXML -->
<!-- Mapping table version: 28-February-2003 -->
<HTML>
<HEAD>
<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
The problem persists if I use SublimeText's "Save with Encoding" option and save as UTF-8.
The odd thing is that this is only happening with some PDFs. Others save with the correct encoding, yet there is no difference in the export process. Is there anything I can check for in the PDF files themselves, or in the export process, to look into this?

rails 3 using iso-8859-1 for the entire project

I need a Rails 3 project written in ISO-8859-1 encoding.
The problem is my views: if I put an accented Latin character in them, it displays as a black "?".
To work around it I have to put # encoding: iso-8859-1 in each view file.
Is there a way to declare that the whole project is ISO-8859-1?
I already tried changing the application.rb file, but with no success.
Thanks.
I suggest trying this in config/application.rb (perhaps you tried config.encoding before; see the side note below):
config.action_controller.default_charset = 'ISO-8859-1'
That should work in both Ruby 1.8 and Ruby 1.9.
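A minimal sketch of where the setting lives (the module name MyApp is illustrative):
# config/application.rb
module MyApp
  class Application < Rails::Application
    # Leave config.encoding at its UTF-8 default (see the side note)
    config.action_controller.default_charset = 'ISO-8859-1'
  end
end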
Also make sure your HTML layout is synchronized:
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
Side note: You should probably leave config.encoding = 'UTF-8' alone. This is especially true for Ruby 1.8, where config.encoding is used to set $KCODE and doesn't like $KCODE = 'NONE', which is what you would have to put for ISO-8859-1.