I needed to convert some PDFs back to text. I tried many software and online tools and the result was always mediocre.
Why is it so difficult, technically speaking?
Let's not assume you are talking about PDFs which merely wrap some bitmap image because it should be clear that in that case you can only resort to OCR with all its restrictions.
Let's instead assume that text is drawn in the PDF at hand.
What is drawn on a PDF page is determined by a sequence of instructions in the content stream of that page. "Text is drawn" on a page means that among those instructions there are some setting the font to use by the instructions to come, some setting the text position and direction to use by the instructions to come, and some actually drawing text given by "string arguments".
Text extraction is the task of taking the sequence of instructions from a content stream and, instead of drawing the text as indicated by the font and position setting instructions, exporting it in a sensible order using a standard encoding, usually the encoding of the character type of the programming language / platform used.
The first problem is to understand the encoding of the string arguments of those text drawing instructions:
each font can have its own encoding; to extract the text one cannot simply ignore everything but the instructions drawing text and concatenate their string contents, you always have to take the current font into account (some extremely simple text extractors ignore this and, therefore, fail pretty often to return something sensible);
there are a large number of predefined encodings, some reminiscent of encodings you know, e.g. WinAnsiEncoding, and many you likely don't know, e.g. Add-RKSJ-H; these encodings may use a constant number of bytes per glyph or they may be mixed multi-byte; so a text extractor must support very many encodings to start with;
encodings also may be completely ad-hoc and arbitrary; in particular in case of embedded subset fonts one often sees ad-hoc encodings generated by dealing out character codes from some starting value whenever one is needed; i.e. the first glyph in a given font used on a page is given the starting value as code, the next, different glyph is given the starting value plus one, the next, different one the starting value plus two, etc; "Hello World" and a starting value of 48 (ASCII value of '0') would result in "01223453627"; these fonts may contain a mapping to Unicode but they are not required to.
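To make that last scheme concrete, here is a minimal sketch (plain C++, no PDF library involved, purely illustrative) of how such an ad-hoc subset encoding hands out codes; it reproduces the "Hello World" to "01223453627" example above:

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<char, char> codeForGlyph;   // glyph -> character code assigned to it
    char nextCode = '0';                 // starting value 48
    std::string encoded;
    for (char glyph : std::string("Hello World")) {
        if (codeForGlyph.find(glyph) == codeForGlyph.end())
            codeForGlyph[glyph] = nextCode++;   // hand out the next free code
        encoded += codeForGlyph[glyph];
    }
    std::cout << encoded << "\n";        // prints 01223453627
}

Without a ToUnicode map (or an embedded font you can analyze), those codes carry no meaning at all for an extractor.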
The next problem is to make sense out of the order of the strings:
the string drawing instructions may occur in an arbitrary order, e.g. "Hello" might be drawn "lo" first, then after moving back "el", then after again moving back "H"; to extract the text one cannot ignore text positioning instructions and simply concatenate text strings, you always have to take the current position into account (some simple text extractors ignore this and, therefore, can fail to return something sensible);
multi-columnar text may present a difficulty, text may be drawn line by line, e.g. first the text of the top line of the first column, then the top line of the second column, then the second line of the first column, then the second line of the second column, etc.; there need not be any hints in the PDF that the text is multi-columnar.
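A very rough sketch of the re-ordering step (plain C++, ignoring baseline clustering, rotated text, and the column detection just mentioned, all of which real extractors need):

#include <algorithm>
#include <string>
#include <vector>

// Purely illustrative type: one string-drawing instruction plus the position
// it was drawn at, as recovered while interpreting the content stream.
struct TextChunk { double x, y; std::string text; };

std::string assembleInReadingOrder(std::vector<TextChunk> chunks) {
    std::sort(chunks.begin(), chunks.end(),
              [](const TextChunk& a, const TextChunk& b) {
                  if (a.y != b.y) return a.y > b.y;  // higher y first: PDF y grows upwards
                  return a.x < b.x;                  // then left to right
              });
    std::string out;
    for (const auto& c : chunks) out += c.text;
    return out;
}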
Another problem is to recognize formatting or styling artifacts:
spaces between words need not be created by drawing a space glyph, it may also be done by text position changing instructions; text extractors not trying to recognize gaps created by text positioning instructions may return a result without spaces; on the other hand the same technique can be used to draw adjacent glyphs at an optimal distance, aka kerning; text extractors trying to recognize gaps created by text positioning instructions may falsely return spaces where there should be none;
sometimes selected words are printed s p a c e d o u t for extra emphasis; in the extracted text these gaps might be presented as space characters which automatic postprocessing of the text may see as word separators;
usually for bold text one uses a different, bold font program; if that is not at hand, people sometimes get creative and emulate bold by printing the same text twice with a minute offset; with a slightly larger offset (or a different transformation) and a different color a shadow effect can be emulated; if the text extractor does not try to recognize this, you end up having some duplicate characters in the output.
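Extractors that do try to handle the gap problem usually fall back on a heuristic: compare the horizontal jump caused by a positioning instruction with the width of a space in the current font, and only emit a space when the jump is clearly larger than a kerning adjustment. A minimal sketch, with an assumed threshold:

// The 0.5 threshold is an assumption, not a value from the PDF specification;
// it has to be tuned per document (and still misfires on spaced-out text).
bool gapIsWordBreak(double gapWidth, double spaceWidthInCurrentFont) {
    return gapWidth > 0.5 * spaceWidthInCurrentFont;
}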
More problems arise due to incomplete or wrong extra information:
ToUnicode maps of fonts (optional maps from character code to Unicode) may be incomplete or contain errors; there are, for example, many questions here on Stack Overflow dealing with incorrect ToUnicode maps for Indic scripts; the text extraction results reflect these errors;
there even are PDFs with contradictory information, e.g. with an error in the ToUnicode map but the correct information in an ActualText entry; this is used by some PDF creators to allow correct copy&paste from some programs (preferring an ActualText entry in such a situation) while injecting errors in the output of other programs (preferring ToUnicode information then).
Yet another problem arises if you expect the text extractor to extract only text eventually visible in the page:
text may be drawn outside the current clipping area or outside the visible page area; text extractors need to keep these in mind;
text may be drawn using the rendering mode "invisible"; text extractors have to keep an eye on the rendering mode;
text may be drawn using the same color as the background; to recognize this, a text extractor cannot look only at the current instruction and a few graphics state details, it has to take into account anything drawn beforehand at the location of the text;
text may be drawn as a clip path; to recognize whether this text is visible in the end, a text extractor must keep track of what is drawn in the text area as long as the clip path is active;
text may be covered by something else later; a text extractor must drop recognized text in such a case; but depending on blend modes and transparency settings these coverings might or might not allow the text to shine through; thus, for a correct result the text extractor must for each glyph keep track of the color it's drawn with, the color of the backdrop, and what all those spiffy effects do with those colors later on; and of course, both glyph color and backdrop color can be interesting, e.g. some shading colors; and the color spaces involved may differ, requiring one to convert back and forth between color spaces; and so on.
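Of all these checks, only the rendering-mode one is trivial; as a sketch:

// Text rendering mode 3 ("Tr 3" in the content stream) means neither fill
// nor stroke, i.e. invisible text.
bool renderingModeIsInvisible(int textRenderingMode) {
    return textRenderingMode == 3;
}

The clipping, coverage, and background-color cases require tracking far more of the page state.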
Furthermore, text may be drawn where text extractors usually don't look:
some tools hide text from text extraction by putting it into a pattern and filling the page area with that pattern;
similarly there are type 3 fonts; each character in a type 3 font is represented by its own content stream; thus, a tool can draw all text in the content stream of a single type 3 font glyph and then draw that glyph on the page.
...
By now you surely have gotten an idea of why text extraction results can be less than optimal. And be assured, the list above is not complete; there are still more complications for text extraction.
I used CoreText to render text as below:
Another very common typesetting operation is drawing a single line of text to use as a label for a user-interface element.
In Core Text this requires only two lines of code, one to create the line object with an attributed string and another to draw the line into a graphic context.
but it only shows how to create an attributes dictionary and use it to create the attributed string.
Obviously there are 3 paragraphs, and I use the default CTParagraphStyleSetting, so ParagraphSpacing and ParagraphSpacingBefore are set to 0 by default.
But in the rendered result the paragraph spacing is far too large.
Any idea to reduce the paragraph space?
This might help:
Technical Q&A QA1698 - How do I work-around an issue where some lines in my Core Text output have extra line spacing?
You can try:
kCTParagraphStyleSpecifierMinimumLineHeight
kCTParagraphStyleSpecifierMaximumLineHeight
kCTParagraphStyleSpecifierLineSpacing
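For example, a minimal sketch along the lines of QA1698 (Core Text is a C API, so this compiles as C or C++); attrString is assumed to be the CFMutableAttributedStringRef you already built, and the point values are placeholders to tune for your font:

CGFloat lineHeight  = 16.0;  // assumed target line height in points
CGFloat lineSpacing = 0.0;   // no extra space between lines

CTParagraphStyleSetting settings[] = {
    { kCTParagraphStyleSpecifierMinimumLineHeight, sizeof(lineHeight),  &lineHeight  },
    { kCTParagraphStyleSpecifierMaximumLineHeight, sizeof(lineHeight),  &lineHeight  },
    { kCTParagraphStyleSpecifierLineSpacing,       sizeof(lineSpacing), &lineSpacing },
};
CTParagraphStyleRef style =
    CTParagraphStyleCreate(settings, sizeof(settings) / sizeof(settings[0]));

// Apply the style to the whole string before creating the line/framesetter.
CFAttributedStringSetAttribute(attrString,
                               CFRangeMake(0, CFAttributedStringGetLength(attrString)),
                               kCTParagraphStyleAttributeName,
                               style);
CFRelease(style);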
I've got a very frustrating sizer problem.
I have two wxFlexGridSizers (and a few other things) inside a vertical wxBoxSizer, like so:
mMainSizer->Add(topsizer, wxSizerFlags(0).Expand());
mMainSizer->Add(1, lineheight);
mMainSizer->Add(mTypeLabel);
mMainSizer->Add(mTypeSizer, wxSizerFlags(0).Expand());
mMainSizer->Add(1, lineheight);
Each wxFlexGridSizer is filled using the same code:
sizer->Add(label, wxSizerFlags(1).Expand());
sizer->Add(fieldwidth, 1); // To separate label and data
sizer->Add(data, wxSizerFlags(0).Border(wxRIGHT, rborder).Right());
But the wxFlexGridSizers aren't being Expanded to the same width, as I intend. The lower one, with smaller labels, is always narrower than the upper one, leaving the data fields misaligned between them. Since they were both added with the Expand() flag, the narrower one should expand to the same width as the wider one, right?
(I've even tried adding the Right() flag to the lower one too, when adding it to the wxBoxSizer, but it did nothing, which really confused me.)
Can anyone save my sanity by pointing out where I'm going wrong?
EDIT: As far as I can tell, this is a wxWidgets bug. The Expand flag should tell items in a vertical sizer to expand themselves to their maximum width. If I'm wrong, someone please correct me.
As it turns out, the bug was mine. I thought I'd given the wxFlexGridSizers a growable column, with wxFlexGridSizer::AddGrowableCol, but that must have been in an earlier iteration of the code. Once I'd done that, they expanded just as I wanted them to.
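For reference, the missing piece looked roughly like this; which column should grow depends on your layout, the data column (index 2, matching the label / spacer / data pattern above) is assumed here:

wxFlexGridSizer* sizer = new wxFlexGridSizer(3, 0, 0);  // 3 columns, no gaps
sizer->AddGrowableCol(2);   // without a growable column, Expand() has nothing to grow
// ... then Add() the label / spacer / data controls as before.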
I have spent the weekend working on a personal project and got stuck here. Basically, I need to turn
[0;37m[33m o0==============================~o[0]o~==============================0o
into
o0==============================~o[0]o~==============================0o (only this text would be yellow now)
Using Cocoa's regex functionality, I was able to find and capture the "[0;", "37m" and "[33m" individually. The "0;" indicates the server's desire for any previous text styling to be removed and returned to the default, which is black background and white text. The "37m" indicates that the server would like the text to be colored white (not sure why this is here, but this is what the server sends). The final "33m" indicates that the server wants the text to be colored yellow. My code correctly finds, strips out, and identifies the requested color changes in the string, but I am having trouble applying these colors to the NSAttributedString I create. The ranges supplied by the regex searches are no longer valid once I strip the color sequences out of the final string. What is an effective way to figure out where the color changes should be applied to the stripped string? In this example, all the color codes are supplied at the beginning, but in other cases the color codes could be in the middle, causing the string to change color mid-line. NSAttributedString can handle this if I could figure out the proper ranges to assign the requested colors to.
Now that Lion is out I can post the answer. Basically you can use the fancy regex abilities in Lion to figure out what is up. The code to do this (which needs to be refactored, but at least it works) can be found here:
https://github.com/sgoodwin/Turbo-Mud/blob/experiment/Turbo%20Mud/Turbo_MudAppDelegate.m
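If you don't want to depend on the Lion-only regex features, the range problem itself can also be solved with a single pass over the raw string. A minimal sketch (plain C++ rather than Cocoa, and the names are made up): record the length of the stripped output at the moment each color code is consumed; those offsets are then already relative to the stripped string you later hand to NSAttributedString.

#include <cctype>
#include <string>
#include <utility>
#include <vector>

struct ColorMark { size_t offset; int code; };  // offset into the *stripped* text

std::pair<std::string, std::vector<ColorMark>> stripAnsi(const std::string& raw) {
    std::string stripped;
    std::vector<ColorMark> marks;
    for (size_t i = 0; i < raw.size(); ) {
        if (raw[i] == '\033' && i + 1 < raw.size() && raw[i + 1] == '[') {
            size_t j = i + 2;
            int code = 0;
            while (j < raw.size() && raw[j] != 'm') {
                if (raw[j] == ';') { marks.push_back({stripped.size(), code}); code = 0; }
                else if (std::isdigit((unsigned char)raw[j])) code = code * 10 + (raw[j] - '0');
                ++j;
            }
            marks.push_back({stripped.size(), code});  // e.g. codes 0, 37, 33 at offset 0
            i = j + 1;                                 // skip the terminating 'm'
        } else {
            stripped += raw[i++];                      // visible character, keep it
        }
    }
    return {stripped, marks};
}

Each mark then starts a range that runs to the next mark (or to the end of the string); apply the corresponding color attribute over that range.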
I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book.
I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool.
Has anyone seen anything like this out there or know the best way to start building one?
I'd still like to see a popular editor or IDE implement elastic tabstops.
Thinking with Style suggests using your favorite text-manipulation software, like Word or Writer: create your program code in rich XML and extract the compiler-relevant sections with XSLT. The "Office" software will provide all the advanced text-manipulation and formatting features.
I expected you'd get down-modded and picked on for that suggestion, but there's some real sense to the idea.
The main advantage of the traditional 'non-proportional' font requirement in code editors is to ease the burden of performing code formatting.
But with all of the interactive automatic formatting that occurs in modern IDEs, it's really possible that a proportional font could improve the readability of the code (rather than hampering it, as I'm sure many purists would expect).
A character called Roedy Green (famous for his 'how to write unmaintainable code' articles) wrote about a theoretical editor/language, based on Java and called Bali. It didn't include proportional fonts exactly, but it did include the idea of having non-uniform font sizes.
Also, this short Joel Spolsky post points to a solution, elastic tab stops (as mentioned by another commenter), that would help with the support of proportional (and variable-sized) fonts.
#Thomas Owens
I don't find code formatted like that easier to read.
That's fine, it is just a personal preference and we can disagree. Format it the way you think is best and I'll respect it. I frequently ask myself 'how should I format this or that thing?' My answer is always to format it to improve readability, which I admit can be subjective.
Regarding your sample, I just like having that nicely aligned column on the right-hand side; it's sort of a quick "index" into the code on the left. Having said that, I would probably avoid commenting every line like that anyway, because the code itself shouldn't need that much explanation. And if it does, I tend to write a paragraph above the code.
But consider this example from the original poster. It's easier to spot the comments in the second one, in my opinion.
for (size_type i = 0; i<v.size(); i++) { // rehash:
    size_type ii = hash(v[i].key)%b.size(); // hash
    v[i].next = b[ii]; // link
    b[ii] = &v[i];
}

for (size_type i = 0; i<v.size(); i++) {     // rehash:
    size_type ii = hash(v[i].key)%b.size();  // hash
    v[i].next = b[ii];                       // link
    b[ii] = &v[i];
}
#Thomas Owens
But do people really line comments up like that? ... I never try to line up declarations or comments or anything, and the only place I've ever seen that is in textbooks.
Yes people do line up comments and declarations and all sorts of things. Consistently well formatted code is easier to read and code that is easier to read is easier to maintain.
I wonder why nobody actually answers your question, and why the accepted answer doesn't really have anything to do with your question. But anyway...
a proportional font IDE
In Eclipse you can choose any font on your system.
set tab stops for my indents
In Eclipse you can configure the automatic indentation, including setting it to "tabs only".
lining up function signatures and rows of assignment statements
In Eclipse, automatic indentation does that.
which could be specified in points instead of fixed character positions.
Sorry, I don't think Eclipse can help you there. But it is open source. ;-)
bold and italics
Eclipse has that.
Various font sizes and even style sheets would be cool
I think Eclipse only uses one font and font-size for each file type (for example Java source file), but you can have different "style sheets" for different file types.
When I last looked at Eclipse (some time ago now!) it allowed you to choose any installed font to work in. Not so sure whether it supported the notion of indenting using tab stops.
It looked cool, but the code was definitely harder to read...
Soeren: That's kind of neat, IMO. But do people really line comments up like that? For my end-of-line comments, I always use a single space and then // or /* or equivalent, depending on the language I'm using. I never try to line up declarations or comments or anything, and the only place I've ever seen that is in textbooks.
#Brian Ensink: I don't find code formatted like that easier to read.
int var1 = 1 //Comment
int longerVar = 2 //Comment
int anotherVar = 4 //Comment

versus

int var2       = 1 //Comment
int longerVar  = 2 //Comment
int anotherVar = 4 //Comment
I find the first lines easier to read than the second lines, personally.
The indentation part of your question is being done today in a real product, though possibly to an even greater level of automation than you imagined. The product I mention is an XSLT IDE, but the same formatting principles would work with most (but not all) conventional code syntaxes.
This really has to be seen in video to get the sense of it all (sorry about the music back-track). There's also a light XML editor spin-off product, XMLQuire, that serves as a technology demonstrator.
The screenshot below shows XML formatted with quite complex formatting rules in this XSLT IDE, where all indentation is performed word-processor style, using the left margin - not space or tab characters.
To emphasise this formatting concept, all characters have been highlighted to show where the left-margin extends to keep indentation. I use the term Virtual Formatting to describe this - it's not like elastic tab stops, because there simply are no tabs, just margin information which is part of the 'paragraph' formatting (RTF codes are used here). The parser reformats continuously, in the same pass as syntax coloring.
A proportional font hasn't been used here, but it could have been quite easily - because the indentation is set in TWIPS. The editing experience is quite compelling because, as you refactor the code (XML in this case), perhaps through drag and drop, or by extending the length of an attribute value, the indentation just re-flows itself to fit - there's no tab-key or 'reformat' button to press.
So, the indentation is there, but the font work is a more complex problem. I've experimented with this, but found that if fonts are re-selected as you type, the horizontal shifting of the code is too distracting - there would need to be a user-initiated 'format fonts' command probably. The product also has Ink/Handwriting technology built-in for annotating code, but I've yet to exploit this in the live release.
Folks are all complaining about comments not lining up.
Seems to me that there's a very simple solution: define the unit space as the widest character in the font. Now, proportionally space all characters except the space. The space takes up as much room as needed to line up the next character where it would be if all preceding characters on the line were the widest in the font.
i.e.:
iiii_space_Foo
xxxx_space_Foo
would line up the "Foo", with the space after the "i" being much wider than after the "x".
So call it elastic spaces, rather than tab stops.
If you're a smart editor, treat comments specially, but that's just gravy.
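As a back-of-the-envelope sketch of that rule (in C++; measure() is a hypothetical function returning the advance width of a glyph in the proportional font):

#include <string>

double elasticSpaceWidth(const std::string& lineBeforeSpace,
                         double widestGlyphWidth,
                         double (*measure)(char)) {
    double actualX = 0.0;                      // where the space really starts
    for (char c : lineBeforeSpace) actualX += measure(c);
    // Where the next character would start if every character so far,
    // including the space itself, were as wide as the widest glyph:
    double targetX = (lineBeforeSpace.size() + 1) * widestGlyphWidth;
    return targetX - actualX;                  // stretch the space to close the gap
}

With this rule, "iiii Foo" and "xxxx Foo" both place "Foo" at the same x position, as described above.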
Let me recall arguments about using the 'var' keyword in C#. People hated it, and thought it would make code less clear. For example, you couldn't know the type in something like:
var x = GetResults("Main");
foreach(var y in x)
{
    WriteResult(x);
}
Their argument was that you couldn't see whether x was an array, a List or any other IEnumerable. Or what the type of y was. In my opinion the lack of clarity did not arise from using var, but from picking unclear variable names. Why not just type:
var electionResults = GetRegionalElectionResults("Main");
foreach(var result in electionResults)
{
    Write(result); // you can see what you're writing!!
}
"But you still cannot see the type of electionResults!" - does it really matter? If you want to change the return type of GetRegionalElectionResults, you can do so. Any IEnumerable will do.
Fast forward to now. People want to align comments and similar code:
int var2       = 1;  //The number of days since startup, including the first
int longerVar  = 2;  //The number of free days per week
int anotherVar = 38; //The number of working hours per week
So without the comment everything is unclear. And if you don't align the values, you cannot separate them from the variables. But do you? What about this:
int daysSinceStartup = 1; // including first
int freeDaysPerWeek = 2;
int workingHoursPerWeek = 38;
If you need a comment on EVERY LINE, you're doing something wrong. "But you still need to align the VALUES" - do you? What does 38 have to do with 2?
In C#, most code blocks can easily be aligned using only tabs (or actually, multiples of four spaces):
var regionsWithIncrease =
    from result in GetRegionalElectionResults()
    where result.TotalCount > result.PreviousTotalCount &&
          result.PreviousTotalCount > 0 // just new regions
    select result.Region;
foreach (var region in regionsWithIncrease)
{
    Write(region);
}
You should never use line-by-line comments and you should rarely need to vertically align things. Rarely, not never. So I understand if some of you guys prefer a monospaced font. I prefer the readability of the fonts Noto Sans or Source Sans Pro. These fonts are available freely from Google, and resemble Calibri, but are designed for programming and thus have all the necessary characteristics:
Big : ; . , so you can clearly see the difference
Clearly distinct 0Oo and distinct Il|
The major problem with proportional fonts is they destroy the vertical alignment of the code and this is a fairly major loss when it comes to writing code.
The vertical alignment makes it possible to manipulate rectangular blocks of code that span multiple lines by allowing block operations like cut, copy, paste, delete and indent, unindent etc to be easily performed.
As an example consider this snippet of code:
a1 = a111;
B2 = aaaa;
c3 = AAAA;
w4 = wwWW;
W4 = WWWW;
In a mono-spaced font the = and the ; all line up.
Now if this text is loaded into Word and displayed using a proportional font, the text effectively turns into this:
NOTE: Extra white space added to show how the = and ; no longer line up:
a1 = a1 1 1;
B2 = aaaa;
c3 = A A A A;
w4 = w w W W;
W4 = W W W W;
With the vertical alignment gone those nice blocks of code effectively disappear.
Also, because the cursor is no longer guaranteed to move vertically (i.e. the column number is not always constant from one line to the next), it makes it more difficult to write throw-away macro scripts designed to manipulate similar-looking lines.