Find text boundaries in PDF with many texts

I have a set (300k) of PDFs with multiple-choice questions (about 50 per PDF).
Each of these PDFs may have a slightly different layout, which makes it impossible to just convert them to text (pdftotext) and match questions using a regexp. Two typical layouts look like this:
Question 1
WORDING
a) ALTERNATIVE_A
b) ALTERNATIVE_B
c) ALTERNATIVE_C
d) ALTERNATIVE_D
Q1) WORDING
a. ALTERNATIVE_A
b. ALTERNATIVE_B
c. ALTERNATIVE_C
d. ALTERNATIVE_D
e. ALTERNATIVE_E
On the other hand, all files have one thing in common: each question sits close to its alternatives and far from the other questions. This characteristic made me wonder whether this is a computer vision task.
Is there any software which could aid me in this task?
Thanks!

Is your problem in getting the text or locating the questions?
If it is the former your problem can be solved using an OCR (Optical Character Recognition) software. Specifically you should look for one that works on PDFs such as this one:
http://www.onlineocr.net/
This could (if it works properly) give you the text in the PDF, which you could then parse further.
If your problem is locating the questions I would expect NLP techniques will work better than visual ones, but if you really want to do it using computer vision then I would suggest looking into bounding box detection/suggestion algorithms.
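If you do want to try the layout-based route without going to full computer vision, here is a minimal sketch of the idea, assuming the pdfminer.six library (an assumption on my part; pdfminer is mentioned elsewhere on this page) and a made-up 15-point vertical gap threshold that you would have to tune on your own files:

from pdfminer.high_level import extract_pages
from pdfminer.layout import LAParams, LTTextContainer

def question_blocks(pdf_path, gap_threshold=15):
    """Yield clusters of text boxes separated by large vertical gaps."""
    for page in extract_pages(pdf_path, laparams=LAParams()):
        boxes = [el for el in page if isinstance(el, LTTextContainer)]
        boxes.sort(key=lambda b: -b.y1)          # top-to-bottom (PDF y grows upward)
        block, prev_bottom = [], None
        for box in boxes:
            # A gap much larger than normal line spacing marks the next question.
            if prev_bottom is not None and prev_bottom - box.y1 > gap_threshold:
                yield "".join(b.get_text() for b in block)
                block = []
            block.append(box)
            prev_bottom = box.y0
        if block:
            yield "".join(b.get_text() for b in block)

for i, block in enumerate(question_blocks("exam.pdf"), 1):   # "exam.pdf" is a placeholder
    print(f"--- block {i} ---\n{block}")

Each yielded block should then contain one question plus its alternatives, which is much easier to match with a regexp than a whole page of unordered text.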


Need a suitable scripting language for below requirement of PDF documents

We have a project requirement to validate PDF files which contain the following things for different policies:
Page Number
Images (screen shots)
Here we want to validate whether all the pages have images (screen shots), the number of images in the PDF, image duplication, and empty pages.
Please suggest a suitable scripting language and a way to fulfill our requirement.
Note: each policy will have a different set of screen shots, and hence the total number of pages and the image content of each PDF will vary.
Thanks in Advance!
I've had to validate a lot of PDFs and found this toolkit very useful: http://euske.github.io/pdfminer/index.html . It's written in Python, but comes with an excellent dumppdf utility which lets you look at the page number of each page and all the elements on that page.
Having said that, I've only used it for text and am not sure how it identifies images.
I would comment on Kim Ryan's answer, except that I don't have enough reputation to comment yet, which seems pretty silly.
In any case, I agree with Kim that pdfminer is probably your best bet overall. However, I would mention that looking for images isn't all that difficult, and there is an "extract" example in the pdfrw library that will find images and pull them out to a separate PDF file. I don't think it would be very hard to modify it to match images to page numbers. I am the pdfrw author, so you can email me (address at github) if you have any questions on this.
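For what it's worth, here is a rough sketch of the page-by-page image check using pdfminer.six's layout objects; treat it as an illustration rather than a finished validation script, since how images are embedded varies between PDFs:

from pdfminer.high_level import extract_pages
from pdfminer.layout import LTFigure, LTImage, LTTextContainer

def count_images(element):
    # Images are often wrapped inside LTFigure containers, so recurse into them.
    if isinstance(element, LTImage):
        return 1
    if isinstance(element, LTFigure):
        return sum(count_images(child) for child in element)
    return 0

for page_no, page in enumerate(extract_pages("policy.pdf"), 1):   # placeholder file name
    images = sum(count_images(el) for el in page)
    has_text = any(isinstance(el, LTTextContainer) for el in page)
    print(f"page {page_no}: {images} image(s), empty page: {not (images or has_text)}")

Duplicate detection could be layered on top by hashing the raw image streams, but that is beyond this sketch.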

How to use CFStringTokenizer with Chinese and Japanese?

I'm using the code here to split text into individual words, and it's working great for all languages I have tried, except for Japanese and Chinese.
Is there a way that code can be tweaked to properly tokenize Japanese and Chinese as well? The documentation says those languages are supported, but it does not seem to be breaking words in the proper places. For example, when it tokenizes "新しい" it breaks it into two words "新し" and "い" when it should be one (I don't speak Japanese, so I don't know if that is actually correct, but the sample I have says that those should all be one word). Other times it skips over words.
I did try creating Chinese and Japanese locales, while using kCFStringTokenizerUnitWordBoundary. The results improved, but are still not good enough for what I'm doing (adding hyperlinks to vocabulary words).
I am aware of some other tokenizers that are available, but would rather avoid them if I can just stick with core foundation.
[UPDATE] We ended up using mecab with a specific user dictionary for Japanese for some time, and have now moved over to just doing all of this on the server side. It may not be perfect there, but we have consistent results across all platforms.
If you know that you're parsing a particular language, you should create your CFStringTokenizer with the correct CFLocale (or at the very least, the guess from CFStringTokenizerCopyBestStringLanguage) and use kCFStringTokenizerUnitWordBoundary.
Unfortunately, perfect word segmentation of Chinese and Japanese text remains an open and complex problem, so any segmentation library you use is going to have some failings. For Japanese, CFStringTokenizer uses the MeCab library internally and ICU's Boundary Analysis (only when using kCFStringTokenizerUnitWordBoundary, which is why you're getting a funny break with "新しい" without it).
Also have a look at NSLinguisticTagger.
But by itself it won't give you much more.
Truth be told, these two languages (and some others) are really hard to tokenize accurately programmatically.
You should also see the WWDC videos on LSM (Latent Semantic Mapping). They cover the topic of stemming and lemmas. This is the art and science of more accurately determining how to tokenize meaningfully.
What you want to do is hard. Finding word boundaries alone does not give you enough context to convey accurate meaning. It requires looking at the context and also identifying idioms and phrases that should not be broken by word. (Not to mention grammatical forms)
After that, look again at the available libraries, then get a book on the Python NLTK to learn what you really need to know about NLP and to understand how far you really want to pursue this.
Larger bodies of text inherently yield better results. There's no accounting for typos and bad grammar. Much of the context needed to drive the logic of the analysis is implicit context, not directly written as words. You get to build rules and train the thing.
Japanese is a particularly tough one and many libraries developed outside of Japan don't come close. You need some knowledge of a language to know if the analysis is working. Even native Japanese people can have a hard time doing the natural analysis without the proper context. There are common scenarios where the language presents two mutually intelligible correct word boundaries.
To give an analogy, it's like doing lots of lookahead and lookbehind in regular expressions.

Extracting information from PDFs of research papers [closed]

I need a mechanism for extracting bibliographic metadata from PDF documents, to save people entering it by hand or cut-and-pasting it.
At the very least, the title and abstract. The list of authors and their affiliations would be good. Extracting out the references would be amazing.
Ideally this would be an open source solution.
The problem is that not all PDFs encode the text, and many that do fail to preserve the logical order of the text, so just running pdftotext gives you line 1 of column 1, line 1 of column 2, line 2 of column 1, etc.
I know there's a lot of libraries. It's identifying the abstract, title, authors, etc. in the document that I need to solve. This is never going to be possible every time, but 80% would save a lot of human effort.
I'm only allowed one link per posting so this is it:
pdfinfo Linux manual page
This might get the title and authors. Look at the bottom of the manual page, and there's a link to www.foolabs.com/xpdf where the open source for the program can be found, as well as binaries for various platforms.
To pull out bibliographic references, look at cb2bib:
cb2Bib is a free, open source, and multiplatform application for rapidly extracting unformatted, or unstandardized bibliographic references from email alerts, journal Web pages, and PDF files.
You might also want to check the discussion forums at www.zotero.org where this topic has been discussed.
We ran a contest to solve this problem at Dev8D in London, Feb 2010 and we got a nice little GPL tool created as a result. We've not yet integrated it into our systems but it's there in the world.
https://code.google.com/p/pdfssa4met/
Might be a tad simplistic, but Googling "bibtex + paper title" usually gets you a formatted BibTeX entry from the ACM, CiteSeer, or other such reference-tracking sites. Of course, this assumes the paper isn't from a non-computing journal :D
-- EDIT --
I have a feeling you won't find a ready-made solution for this; you might want to write to citation trackers such as CiteSeer, the ACM and Google Scholar to get ideas about what they have done. There are tons of others, and you might find that their implementations are not closed source, but are also not available in a published form. There is a lot of research material on the subject.
The research team I am part of has looked at such problems, and we have come to the conclusion that hand-written extraction algorithms or machine learning are the way to do it. Hand-written algorithms are probably your best bet.
This is quite a hard problem due to the amount of variation possible. I suggest normalizing the PDFs to text (which you can get from any of the dozens of programmatic PDF libraries). You then need to implement custom text-scraping algorithms.
I would start backward from the end of the PDF and look at what sort of citation keys exist - e.g., [1], [author-year], (author-year) - and then try to parse the sentence following (sketched below). You will probably have to write code to normalize the text you get from a library (removing extra whitespace and such). I would only look for citation keys as the first word of a line, and only for 10 pages per document - the first word must have key delimiters, e.g., '[' or '('. If no keys can be found in 10 pages, then ignore the PDF and flag it for human intervention.
You might want a library that you can further consult programmatically for formatting metadata within citations - e.g., italics have a special meaning.
I think you might end up spending quite some time getting a working solution, followed by a continual process of tuning and adding to the scraping algorithms/engine.
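A small illustrative sketch of that backward scan, with invented patterns and limits you would need to tune; it assumes the text came from pdftotext, which separates pages with form-feed characters:

import re

# First word of a line must be a bracketed or parenthesised citation key.
KEY_PATTERN = re.compile(r"^\s*(\[[^\]]{1,40}\]|\([^)]{1,40}\))\s+\S")

def find_reference_lines(text, max_pages=10):
    pages = text.split("\f")[-max_pages:]      # only look at the last few pages
    refs = []
    for page in reversed(pages):               # work backward from the end
        for line in page.splitlines():
            line = " ".join(line.split())      # normalize whitespace
            if KEY_PATTERN.match(line):
                refs.append(line)
    return refs

with open("paper.txt", encoding="utf-8", errors="ignore") as fh:   # output of pdftotext
    for ref in find_reference_lines(fh.read()):
        print(ref)

If find_reference_lines comes back empty, that would be the point to flag the PDF for human intervention, as described above.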
In this case I would recommend TET from PDFlib.
If you need to get a quick feel for what it can do, take a look at the TET Cookbook
This is not an open source solution, but it's currently the best option in my opinion. It's not platform-dependent and has a rich set of language bindings and commercial backing.
I would be happy if someone pointed me to an equivalent or better open source alternative.
To extract text you would use the TET_xxx() functions and to query metadata you can use the pcos_xxx() functions.
You can also use the command-line tool to generate an XML file containing all the information you need.
tet --tetml word file.pdf
There are examples on how to process TETML with XSLT in the TET Cookbook
What’s included in TETML?
TETML output is encoded in UTF-8 (on zSeries with USS or MVS: EBCDIC-UTF-8, see www.unicode.org/reports/tr16), and includes the following information:
general document information and metadata
text contents of each page (words or paragraph)
glyph information (font name, size, coordinates)
structure information, e.g. tables
information about placed images on the page
resource information, i.e. fonts, colorspaces, and images
error messages if an exception occurred during PDF processing
CERMINE - Content ExtRactor and MINEr
Described in the paper: TKACZYK, Dominika, et al. CERMINE: automatic extraction of structured metadata from scientific literature. International Journal on Document Analysis and Recognition (IJDAR), 2015, 18.4: 317-335.
Mainly written in Java and available as open source at github.
Another Java library to try would be PDFBox. PDFs are really designed to be viewed and printed, so you definitely want a library to do some of the heavy lifting for you. Even so, you might have to do a little gluing of text pieces back together to get the data you want extracted. Good luck!
Just found pdftk... it's amazing, comes in a binary distribution for Win/Lin/Mac as well as source.
In fact, I solved my other problem (look at my profile, I asked then answered another pdf question .. can't link due to 1 link limitation).
It can do pdf metadata extraction, for example, this will return the line containing the title:
pdftk test.pdf dump_data | grep -A 1 "InfoKey: Title" | grep "InfoValue"
It can dump title, author, mod-date, and even bookmarks and page numbers (test pdf had bookmarks)... obviously a bit of work will be needed to properly grep the output, but I think this should fit your needs.
If your PDFs don't have metadata (i.e., no "Abstract" metadata), you can dump the text using a different tool like pdftotext and use some grep tricks like the above. If your PDFs are not OCR'd, you have a much bigger problem, and ad-hoc querying of the PDF(s) will be painfully slow (best to OCR them first).
Regardless, I would recommend you build an index of your documents instead of having each query scan the file metadata/text.
Take a look at iText. It is a Java library that will let you read PDFs. You will still face the problem of finding the right data, but the library will provide formatting and layout information that might be usable to infer purpose.
PyPDF might be of help. It provides an extensive API for reading and writing the content of a PDF file (unencrypted), and it's written in Python, an easy language.
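A quick sketch of reading the built-in document info with pypdf (the current successor to the PyPDF/PyPDF2 library; as other answers here note, these fields are often missing or wrong, so treat them as a first guess only):

from pypdf import PdfReader

reader = PdfReader("paper.pdf")                 # placeholder file name
info = reader.metadata                          # may be None if no info dictionary exists
if info:
    print("Title: ", info.title)
    print("Author:", info.author)
print("Pages: ", len(reader.pages))
# Fall back to the first page's text when the metadata is empty or useless.
if not info or not info.title:
    print(reader.pages[0].extract_text()[:200])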
Have a look at this research paper - Accurate Information Extraction from Research Papers using Conditional Random Fields
You might want to use an open-source package like Stanford NER to get started on CRFs.
Or perhaps, you could try importing them (the research papers) to Mendeley. Apparently, it should extract the necessary information for you.
Hope this helps.
Here is what I do using linux and cb2bib.
Open up cb2bib and make sure that clipboard connection is ON, and that your reference database is loaded
Find your paper on google scholar
Click 'import to bibtex' underneath the paper
Select (highlight) everything on the next page (i.e., the BibTeX code)
It should now appear formatted in cb2bib
Optionally now press network search (the globe icon) to add additional info.
Press save in cb2bib to add the paper to your ref database.
Repeat this for all the papers. I think in the absence of a method that reliably extracts metadata from PDFs, this is the easiest solution I found.
I recommend gscholar in combination with pdftotext.
Although PDF provides metadata, it is seldom populated with correct content. Often "None" or "Adobe-Photoshop" or other dumb strings are in place of the title field, for example. That is why none of the above tools may derive correct information from PDFs, as the title might be anywhere in the document. Another example: many papers in conference proceedings also carry the title of the conference or the names of the editors, which confuses automatic extraction tools. The results are then dead wrong when you are interested in the real authors of the paper.
So I suggest a semi-automatic approach involving Google Scholar.
First, render the PDF to text so you can extract the author and title.
Second, copy-paste some of this info and query Google Scholar. To automate this, I employ the cool Python script gscholar.py.
So in real life this is what I do:
me#box> pdftotext 10.1.1.90.711.pdf - | head
Computational Geometry 23 (2002) 183–194
www.elsevier.com/locate/comgeo
Voronoi diagrams on the sphere ✩
Hyeon-Suk Na a , Chung-Nim Lee a , Otfried Cheong b,∗
a Department of Mathematics, Pohang University of Science and Technology, South Korea
b Institute of Information and Computing Sciences, Utrecht University, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
Received 28 June 2001; received in revised form 6 September 2001; accepted 12 February 2002
Communicated by J.-R. Sack
me#box> gscholar.py "Voronoi diagrams on the sphere Hyeon-Suk"
@article{na2002voronoi,
title={Voronoi diagrams on the sphere},
author={Na, Hyeon-Suk and Lee, Chung-Nim and Cheong, Otfried},
journal={Computational Geometry},
volume={23},
number={2},
pages={183--194},
year={2002},
publisher={Elsevier}
}
EDIT: Be careful, you might encounter captchas. Another great script is bibfetch.
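If you want to run this over a whole directory, a small wrapper along these lines can glue the two steps together; the "first few text lines are the title" guess is purely illustrative and will be wrong for many layouts, and gscholar.py must be on your PATH (captchas can still interrupt it, as noted above):

import pathlib
import subprocess
import sys

def guess_query(pdf):
    # Render the PDF to text on stdout and take the first few non-empty lines.
    text = subprocess.run(["pdftotext", str(pdf), "-"],
                          capture_output=True, text=True).stdout
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    return " ".join(lines[:3])[:120]

for pdf in sorted(pathlib.Path(sys.argv[1]).glob("*.pdf")):
    query = guess_query(pdf)
    print(f"% {pdf.name}: {query}")
    bib = subprocess.run(["gscholar.py", query],
                         capture_output=True, text=True).stdout
    print(bib or "% no result")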

Application (Not a Markup Language) for Producing a User Manual [closed]

Can anyone recommend a program to create user manuals with? Not a markup language (like LaTeX or DocBook) but rather something interactive like Scribus. As I'm not the only one who will update the manual, the software should be something that's easy for a novice to pick up but still has some advanced features (like linking in text from external sources/tables, handling master pages/themes, etc.).
Regards,
Oscar
Technical Publishing Software - Views on FrameMaker and Its Alternatives
I've done spec documents with LaTeX and Framemaker, and designed a Framemaker workflow to support a team of 5 analysts producing a spec document for an insurance underwriting system. The document was expected to get to 2,000 pages or so. Many years ago (around 1992-1993) I also worked briefly as a typesetter.
Framemaker is designed for technical documentation and does it very well indeed. It also has features designed to support very large documents with multiple authors - people use this system to do documents with more than 100,000 pages. It is also more accessible than LaTeX to users familiar with word processing software.
Key features of Framemaker:
Documents consisting of multiple files: You can pull together a 'Book' with multiple subsections in different files. The document can also be kept in source control.
Textual MIF format for import/export: The importer is somewhat finicky (I found generating working LaTeX to be easier) but you can generate items such as data dictionaries and import them into the document. The file has textual anchors (see below) so you can create cross-reference links that will be stable across imports. I find this to be a key feature for specs as it allows cross-references to link directly to generated items.
Powerful tagging, indexing and cross-referencing system: Everything is based on tags in Framemaker and it is easy to apply tags quickly. This means that cross-referencing, indexing, conditional text and applying styles en masse is easy and just works. You can generate indexes and TOCs based on tags, so having multiple specialised indexes (such as a list of data field names from screens or a data dictionary) is easy to do. The document I described above had 4 separate indexes.
Stable: Framemaker is designed for professionals, so it doesn't second-guess you in the way that Word does. It is also much more stable on large documents. Anyone who's tried to write a document of more than 50-100 pages in Word should have a pretty fair idea of what this implies.
Scriptable: FM has a C API and there are various scripting plugins (FrameScript and FMPython being probably the most widely used) which can be used to automate jobs in FM. Framemaker 10 adds support for a JavaScript-based scripting tool called ExtendScript, presumably ported across from the scripting facility in InDesign.
Single-sourcing: From a single FM document you can produce PDF, Windows Help (CHM), HTML and print documents fairly easily. The cross-references also resolve to hyperlinks.
Global style controls: You can easily set up styles for a document and apply them across the whole document. It also facilitates running headers and footers, with a great deal of flexibility in having them track sections, versions, chapters etc.
Alternatives to Framemaker
LaTeX/Lout: You've already indicated that you don't want a markup language, but the TeX and Lout systems are used for large structured documents and do this well.
Ventura Publisher: Probably the only real alternative to Framemaker if you want that sort of user interface without paying bodily parts for the privilege. It has strong support for structured documents and an XML-based document interchange format. It's now owned by Corel, who still appear to be actively promoting it.
There are a couple of other technical publishing tools on the market: Quicksilver (which used to be known as Interleaf) and ArborText. These two are powerful tools - Interleaf used to be the market leader in this field at one point - but quite expensive.
Adobe InDesign: Although Adobe claim you can do large documents with InDesign, the cross-referencing and other large-document features tend to be viewed as lacking by Framemaker aficionados. There is, however, a text entry system for it called InCopy that apparently does have this sort of functionality, and quite a large body of third-party plugins, some of which do support tagging and other such facilities. InDesign also has a scripting API and a JavaScript interpreter for executing scripts. I haven't used InDesign, so I can't really comment on how well it works in practice.
DocBook: This is really just a standard format for structured documents, but it has a large ecosystem of tools surrounding it for writing and rendering documents. If you don't want to use LaTeX you will probably not want to use DocBook for similar reasons. As Vinko Vrsalovic points out (+1), this link goes to a StackOverflow post from someone describing using DocBook in practice. I've never really used DocBook and I've made so many edits to this post that it's now in Wiki mode, so someone familiar with DocBook might want to elaborate on this.
Word processing software: Word has serious shortcomings as a technical publishing tool and is not recommended. OpenOffice has somewhat better structured documentation functionality than Word and may be a better choice if politics or a requirement to use .doc as a document interchange format preclude a better alternative. WordPerfect is also considerably better for documentation-in-the-large than Word and still has a presence in several vertical markets such as legal offices.
Madcap Software's Blaze and Flare: These are new kids on the block and live in roughly the same space as Framemaker. The company was founded by former eHelp (creators of RoboHelp) employees and is actively developing, with multiple releases yearly. Their offerings have greatly expanded in the past two years, to the detriment of the quality of the individual products. It seems the focus has been on turning out new products, and by consequence there are a lot of "fit and finish" issues in each. The authors have chosen to reinvent the wheel in many ways, resulting in confusing and often broken implementations. Save often; you will encounter unhandled exceptions. Source control integration is flaky. For example, moving or deleting a group of files will result in one source control commit for each file deletion. Big PITA when you have source control email notifications. Hello, 500 emails.
Flare can import Word and Framemaker files, but the import is far from seamless. Expect to retain all of your content but plan on completely re-styling from scratch. Flare shares many of Word's tendencies to do too much behind the scenes and assume what the user would choose. The HTML looks like what Word outputs when you export HTML: lots of custom tags and attributes, deeply nested inline styles, etc. The text editor is maddening; for example, its cursor model is different from that of any other software you've ever used.
Framemaker vs. LaTeX
These two are the main systems I have used to produce large, presentable system documents, and I've had good results with both.
Ease of Learning: TeX can give you absolute control, but actually achieving this on a complex LaTeX document without breaking other items isn't trivial, particularly where a large number of macro packages are involved. Basic LaTeX isn't hard to learn, but making modified versions of .sty files that still work takes a bit of tinkering if you're not a really deep TeX hacker. It can be done, but be prepared to spend quite a lot of time fiddling.
Framemaker can give you a good degree of control on the look of the document and isn't that hard to learn. Getting a house style and tweaking the layout (which you probably will have to do) will be easier with Framemaker.
Ease of Text Entry: You can use tools such as LyX to provide a word-processor-like front end to LaTeX, and these work well if you want to write large bodies of text. Framemaker's DTP-like user interface works in a way familiar to people who are used to word-processing software. From this perspective there is little practical difference.
Templating Document Structure: Framemaker allows a document structure to be defined in terms of tags or an XML schema (if using Structured Framemaker). LaTeX has a set of canned structural elements that are flexible enough to be useful. Adding additional structural elements (e.g. a data dictionary item) can be done as a macro, but making them auto-number is a bit more challenging and you will need to poke around behind the scenes. Both can do it, but it's considerably more technical to do it in LaTeX in anything but trivial cases. Also, LaTeX does not have the facility to template the document structure in the way that Structured Framemaker does. However, you can achieve this type of effect with DocBook and then generate LaTeX from it if desired.
Ease of Integration: I found making a generator for non-trivially complex MIF files to be quite fiddly. The MIF parser is quite pernickety in FM and doesn't really give good diagnostics. LaTeX produces far better error messages and is quite a bit less fussy.
Technical Publishing Software vs. Layout Software
Page layout software started with PageMaker; the other main players in this space were its competitor QuarkXPress and now InDesign, with which Adobe is essentially trying to deprecate and replace both PageMaker and Framemaker. Scribus, which you mentioned before, lives in the same space as these products.
If you are producing a manual with less than (say) 50-100 pages, one of the packages would probably do an adequate job. They are really designed for advertising and layout-heavy publication tasks such as magazines, so their support for large-document features of the sort found in Framemaker is fairly limited. The key issue with these products is scalability - they do not work well on large documents.
Just for reference I have actually typeset a 200-page book (someone's autobiography) using Pagemaker. While the fine-grained kerning and leading control helps a bit for copyfitting, it is still a highly manual process to lay out a book sized document. In this case the book was just straight text with no significant cross-referencing or structure other than chapters. Doing a complex technical spec document or manual this size with Pagemaker would have been very fiddly and probably next to impossible to get right without any mistakes.
Technical Publishing vs. Word Processing Software
This is more of a description of key shortcomings of MS-Word for large spec documents. However, it will illustrate some of the main features required for documentation-in-the-large:
Indexing and Cross-Referencing: This is a real chore in Word, and quite unstable. Framemaker's tagging features and LaTeX's labels mean that you can assign a tag or known label (in a predictable format if necessary). The textual format for the tag anchors is exposed in the user interface, and is used for the linkage. In Word, the anchors are much more opaque and not easily controllable in this way. Combined with the clumsy user interface and instability of the product, this makes maintaining these fiddly and often unstable - you often have to manually fix them up.
Templated Layouts: Style support in Word is quite basic and numbering tends to be somewhat unstable. FrameMaker is all about driving from the tags and applying styles based on the tags. Global style changes just work in Framemaker in a way that they do not in Word.
Large multi-file Documents: I've never been able to make this work well in Word, but it is a key feature in Framemaker and LaTeX. Again, Word's instability means that you tend to spend a lot of time tidying up after it. As the document grows larger, the proportion of time spent on this work grows quadratically: the propensity for breakage is proportional to n (the size of the document), and the time to fix each breakage is also proportional to n.
Why is Word so Unstable: Word does a lot behind the scenes to support novice users and intervene in layouts. It is also not really frame-based (text flow conceptually separate from document layout), but the developers try to implement various frame-like behaviours in the UI. When the A.I. second-guesses you on a complex document it often does the wrong thing. Framemaker 'treats the user as an adult' and does none of this, so things stay where you put them. Other word processors such as OpenOffice and WordPerfect do not misbehave in quite the same way as Word, which is one of the reasons that just about any word processor other than Word will do a better job of technical documents.
Pre-Flighting: In documentation-speak, this is the process of checking that your assemblage of files for the document (image files etc.) is correct before committing to print. The professional systems will complain about things that are wrong, giving you a chance to correct them. Word will just put on a happy face and try to fix things behind the scenes. A good example of this is a Word file with linked graphics. If you copy the file and graphics to another directory and update one of the graphics in situ, Word may well still read the file from the old path (I've seen it do this) and not the new one you've just updated. However, this behaviour is not consistent and typifies the rampant abuse of unstable heuristics in that product.
Pre-Press Support: A publishing system extends into the pre-press phase of the workflow. This means it covers preparation for print. Word processing software tends not to have this functionality, or has it in a very limited form.
Without getting too far into this, a key difference is that publishing software tends to treat you like a consenting adult and not get in the way when you want to scale or automate things. One can use word processing software for large scale documentation but it has many design decisions adapted to casual users writing short documents with little regard for quality. These adaptations come at the expense of fitness-for-task on large scale document preparation work. The main issues I find with Word for spec documents are the poor indexing and cross-referencing and general instability issues where I am always having to go back and fix things. However, political considerations in most environments (I'm a contractor) mean one is often stuck with it.
Some general comments on the state of technical documentation software
Framemaker would be the obvious choice if Adobe didn't keep giving off signals that they are trying to deprecate it and move its user base to InDesign. However, FM is widely used in aerospace, software and engineering circles and Adobe's management would face a lynch mob if they actually EOL'd the product without a credible migration path. From what one reads on the web, Adobe's acquisition of FM was driven by John Warnock, but he was ousted and FM became a victim of office politics. The net result is that it's been moved to maintenance mode and is quite stagnant.
Ventura Publisher has also been relegated to a niche market to some extent, but at least Corel do not have two competing product lines in the way that Adobe do. It is probably a passable substitute for FM and may be more politically acceptable to PHB types as it is marketed as a 'business publishing' system.
Quicksilver and Arbortext both seem to be viable products, but are very expensive. I've not used either, so I can't really make any real judgement on their merits.
The markup language systems are free and very powerful in many ways. Lout might be a bit easier to work with as it doesn't have quite the level of legacy baggage that LaTeX does. DocBook is also quite widely used and does have quite a bit of tool support. These technologies put a significant squeeze on the 'geek' end of Framemaker's market share and do so on their merits - they have probably taken quite a chunk out of Adobe's profit margins over the years. I would not dismiss these technologies out of hand, but they will be harder to learn in practice.
You might try evaluating InDesign and a selected set of plugins (concentrate on those for tagging and cross-ref/index management). Finally, some of the word processing software (Wordperfect and OpenOffice) give you a reasonable toolkit for structured documentation and work considerably better for this than MS-Word.
PostScript
Yes, that is a pun. I haven't touched on Pre-Press functionality of any of these products. Printing and Pre-Press are technical fields in their own right and the scope for expensive mistakes means you should probably leave this up to specialists.
Framemaker, InDesign, Ventura, QuickSilver, Arbortext and (presumably) the MadCap products all come with facilities to do pre-press preparation. By and large, word processing software does not.
Doing pre-press with LaTeX tends to involve post-processing the PS output with software like psutils or rendering to PDF and taking the pre-press workflow from there. Generally, most pre-press houses can work from PDF, so a good PDF writing tool like Distiller is the best interface for work prepared from tools that are not designed for prepress work. Note that the quality of the output from Distiller tends to be better than the Ghostscript based ones like PDFCreator.
Note that the RGB colour space of a monitor does not have a direct map to the CMYK colour space used by a printing press. Actually getting colours - especially colour photos - to come out correctly on a press is somewhat fraught if you do not have the right kit. For print production, see a specialist unless you have reason to believe you know what you're doing. For a casual user I would still recommend this 15 years after I was involved in the industry, as mistakes are very expensive to fix once they're committed to print.
If you really do want to do colour print work in-house, you will probably need to calibrate your monitor. For best results, you should get a high-fidelity monitor like this one from HP. In order to calibrate the monitor you may also need a sensor like one of the ones described in this review if the monitor does not come with one. Most professional graphics cards like these from Nvidia, AMD or Matrox have facilities to support gamma correction; many consumer ones do as well. You will also need to get calibration data for the press you are going to be using to print, although the pre-press house will probably be able to do this.
As stated before, print media is quite technical in its own right, easy to get wrong and expensive to fix once it's gone to print. If you're not 100% certain you've got your calibration right, get a colour proof like a Chromalin. This is done from the actual film separations (and is thus quite expensive), so it gives an accurate rendition of the actual colour of the final printed article. Doing this for a few sample pages will give you accurate feedback about whether your calibration is set up right.
Acknowledgements: Thanks to Aidan Ryan for expanding the section on Madcap products.
I would recommend "Help & Manual" from EC Software. You can create a printed manual, PDF, Windows help file (CHM), and HTML web based help from a single source document.
I've heard good things about FrameMaker. I've not used it myself, but have had it recommended to me for just such an application.
Adobe Framemaker indeed is the classic tool for writing user manuals. I've used it for all kinds of long documents, and it works very well. Too bad that Adobe left it to rot for years, before noticing that users wouldn't switch.
MSWord took till 2003 to get the bullet/numbering bugs out, and I don't know if they finally got master document working.
LaTeX still is a reasonable alternative. The format is easy to process, and you could generate it from a wiki.
If you want collaboration, then a language-based approach (LaTeX would be my preference although XML-based ones are also good -- Docbook being the flagship here) does make sense, especially if you are tracking files with a version control system.
Anything that complicates things, like any software with a binary or proprietary format, will not help you here.
Sorry if it is not the answer you want.
I agree with Ollivier that using DocBook (or LaTeX) is the sanest approach to get easy conversion, sane formatting and nice version control.
Happily, you can try to have your cake and eat it too with a DocBook editor.
Try the ones on this list and see if any satisfies your needs (I haven't used any).
We are using "Help & Manual" from EC Software and it works quite well. Our authors are spread through the U.S. so we share our content files via a hosted SVN server to manage version control. On each workstation we use Tortoise SVN to stay in sync. The product is extremely easy to use and productive.
A VERY nice explanation on what O'Reilly (actually the ones selling all these books...) uses:
O'Reilly Toolchain
It may seem complicated, but depending on the number of pages you are going to write, you should maybe put some consideration into it.
Word (or your favorite word processor)
I make all my user manuals (not to be confused with user HELP files) in Word. Then I can determine if they need to be in PDF, RTF, DOC or even converted to HTML. To solve the multi-user updating issue, I store the file in Source Control which handles all those fun things.
See the Fastware Project blog for an in depth discussion of the tradeoffs of using DocBook etc. Scott Meyer has tried out a lot of possibilities and shares what he's thinking.
Adobe InDesign CS5.5 is much better at cross references and long documents than earlier versions. It is very powerful and relatively easy to learn and use. The feature set is very rich and the more you learn about it the more you can do with it. It supports very powerful XML features and can import and export XML as needed. It can also map Styles to Tags and Tags to styles allowing you to create your XML in an automated fashion if you simply use a full set of character and paragraph styles. I have used the program for years and produced multiple projects from books to one-off advertisements. It is a graphic design tool, but has support for many aspects of book and manual production. I recommend it if you are more concerned with graphics, images or illustrations. InDesign support a wide number of import and export formats.
InDesign CS5.5 has added and improved support for both interactive content and export for EPUB (electronic book) and Adobe's Digital Publishing Suite (DPS) electronic magazine formats.
Framemaker is an excellent tool for books, manuals and long technical documents. It is a bit harder to learn than InDesign but has a richer set of tools for building variables and running headers and footers, if you have the time and inclination to learn how to use them. It also has a very robust XML feature-set, but I have not used it personally.
Unfortunately, Framemaker suffers from a lack of support for graphic design. The color system is very kludgey, and spot (PMS) colors are hard to define. Simple things like adding a stroke color and fill color are rudimentary at best. For example, you still can't select a stroke color that's different from an object's fill color. The program is intended to output to laser and inkjet printers and not really to printing presses.
One feature that is really cool is the ability to apply master pages based on the Paragraph styles appearing on the page. The paragraph/illustration numbering in Framemaker is superior to any other program that I have ever used. But it is also difficult to learn and use.
Both programs support output to PDF and PostScript file formats and can generate hyperlinks and interactive content.

Company insists on using a binary format for all our documentation [closed]

I work at a company that, for some reason, insists that all our development documentation should be in MS Word format, which, being a binary format, means we cannot:
Diff versions of a document against each other (so peer reviewing them is a pain - because of the domain we work in, peer reviews for all changes are essential)
Grep a folder-full of documents for keywords
What do you use to write documentation in and why?
Please also give me ammo to change this situation with...
I recently started using DocBook XML to author my documentation.
On the upside, it's a pure text format. You can break a large document into multiple files, and use nodes to bring them all together into a single book. Table of contents and index are automatically generated. Intra-document links (within arbitrary text, pointing to chapters or sections) are very easy. And with a push of a button, I can create a single-html-file version, a chunked-html version (one file per chapter), and a PDF version.
After some tweaking and customization, I'm very happy with the output. The documents look great!!
DocBook is used extensively by real publishers (most notably, O'Reilly), and it's been around for more than fifteen years, so it's reached a certain level of maturity.
On the other hand, all of the processing is done with XSLT, using an ad-hoc collection of tools. (My own docbook pipeline includes Python, Java, Xerces, Xalan, Apache FOP, and PDF-SAM. Plus the official XSLT stylesheet distribution, and my own XSLT customizations.)
DocBook is not a turnkey solution. You won't be able to get going quickly, without reading the manual. And if you don't know anything about XSLT, you'll have to learn.
On the other hand, there are only a dozen or two XML tags that you really need to know to write the documents. (The real expertise comes into play during doc generation from the XML sources.) If one person on your team was willing to be responsible for writing the doc build script, then everyone else on the team could just learn the DTD and do a decent job contributing.
Anyhow... DocBook definitely has some faults. It's not the easiest system for tech authorship. But it's the best open source tool I know of.
The "Subversion Book" is written in DocBook. Here's a page with links to the different book versions (single-html, chunked-html, and PDF):
http://svnbook.red-bean.com/
And here's a link to the DocBook XML sources for the first chapter, so that you can get an idea for how it works:
http://sourceforge.net/p/svnbook/source/HEAD/tree/branches/1.7/en/book/ch01-fundamental-concepts.xml
For ammo, there's the trusty old Pragmatic Programmer, chapter 14: The Power of Plain Text.
As Pragmatic Programmers, our base material isn't wood or iron, it's knowledge. We gather requirements as knowledge, and then express that knowledge in our designs, implementations, tests, and documents. And we believe the best format for storing knowledge persistently is plain text. With plain text, we give ourselves the ability to manipulate knowledge, both manually and programmatically, using virtually every tool at our disposal.
We use a wiki (specifically the one provided by Trac) for the two reasons you mentioned. Plus, if we really need to we can get the text version of the markup and manipulate it in a text-only environment, too (e.g. as part of svn comments during commit).
A format that can be easily reduced to text-only (non-binary) is definitely a must. Having the ability to upconvert it to a pretty format like a PDF is, for us, not terribly important.
Word has change tracking for documents (although it only works up until you accept the changes) and you can also grep them (the text isn't encrypted). So I'm not sure either of your arguments will hold up under scrutiny. I'd love to give you the ammo to change this but I've become jaded and cynical with age.
We use MS Word for our docs, which is a huge improvement over the earlier choice (Lotus WordPro - ugh!).
We use a wiki - specifically Confluence by Atlassian.
It's a commercial product, and it's great. One of the reasons we picked it over free/open wiki engines is that it has a full-blown WYSIWYG editor and various other features that make it more easily accessible to users who are familiar with Word.
We've also come up with a neat trick where we store images, designs, wireframes, etc. in Subversion, and then embed links in the wiki documents to those resources URLs via the Apache/SVN web interface module; notes on how we do this are here if you're interested.
Like Dylan's organisation, we also use the excellent Confluence wiki. I wrote an article about why this is better approach called Wiki is my word-processor, which should give you some reasons to change the situation.
Benefits of using a wiki for internal documentation include the following.
Word-processor users get sucked into changing the layout and typography, however good your templates are, which wastes time and reduces consistency.
A wiki provides full-text search, which you are unlikely to have for a body of MS Word documents written by everyone.
A wiki provides a document version history; I have never heard of a team successfully keeping all revisions in Word documents and always being able to compare old versions, or using a version control system (with the possible exception of SharePoint, but that's a whole different failure scenario).
A wiki makes hyperlinks between documents easy; it is too hard to reliably link between documents in a collection of Word documents, so new documents end up duplicating older content into new monolithic documents which means they take more time to read and write.
Separate wiki pages can be edited by different people at the same time, and Confluence can merge changes when multiple people edit the same page at the same time; collaboration is harder with a Word document that only one person can edit at a time.
A wiki like Confluence automatically generates navigation pages based on wiki structure and tags; you need a librarian and lots of discipline to make it possible to browse a large collection of Word documents.
A wiki page usually loads and displays more quickly than a Word document.
A wiki page has more automatic meta-data; you need templates and discipline to make sure that Word documents always have Title, Author and Version set in the document properties and visible in the document on-screen and in print.
If you want more ammunition than this, then there is lots of wiki-promotion on The Atlassian Blog.
You could ask for documentation to be in OOXML (.docx, in the case of Word) format. Not as ideal as using ODT, in my opinion; however, it's still just a zip file with a bunch of XML files inside. :-)
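To see the "zip file with XML inside" point for yourself, a couple of lines of Python against a .docx is enough (this is just a demonstration of the format, not a documentation workflow):

import sys
import zipfile

with zipfile.ZipFile(sys.argv[1]) as docx:          # pass the .docx path as an argument
    print(docx.namelist()[:10])                     # the package structure
    xml = docx.read("word/document.xml").decode("utf-8")   # the main body of the document
    print("keyword found:", "peer review" in xml)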
A textual format facilitates merging your documentation with generated items such as JavaDoc, API references or data dictionaries. It also scales much better than Word, which is hard to use for large documents. Finally, a format that supports includes lets multiple authors work on a document concurrently.
LaTeX and FrameMaker (the two systems I have used for this) both have vastly superior indexing and cross-referencing capabilities and have either a native textual format or a textual version of their native format that can be included (MIF in the case of Framemaker). They are also both much more stable than word.
I've built tools that read data dictionaries and generate documentation that can be included into a larger document with stable indexing and two-way cross-referencing. The functional specification for this product was done with LaTeX in this way and got me another gig with the company. I have also developed a similar process with FrameMaker.
Is the entire development team against this requirement, or is it a small group? If it's the entire team, just ignore the mandate and use a text-based format -- wouldn't be the first time employees ignored a silly rule. Works especially well if you've not made a big fuss about it in the past. If you have, management might look especially hard at your docs.
MS Word supports document change tracking and peer review.
The new MS Office format is fully XML-based (to see this, rename an MS Word .docx file to .zip, then unpack it).
Maybe Office 2007 fits both your company's requirements and your concerns?
You can at least compare Word documents: see the "Track changes" command in the "Extra" menu, or use software like DeltaView (found via a Google search; first link at lifehacker.com). Searching in Word documents should be possible with Google Desktop Search or other similar programs that index all files they are able to read.
Do they insist that you write it in Word or only that it's available in Word format? You could write in a text format and convert it to Word automatically.
Don't you store documentation files in some kind of version control system, ideally together with the source code? I would recommend doing this (it makes it easy to get the documentation for old software releases).
And if you do store the docs in a VCS, you will notice that plain text or XML-based files are much better for this, because you can get diffs; also, changes between text files are usually stored more efficiently than changes between binary files.
Not to defend MS products here, but MS word can diff documents.
If you use Beyond Compare as the diff tool for your source-control system (as we do, with Perforce), it will show you differences between revisions of your Word docs. Admittedly, it only shows the textual differences - formatting changes are not shown - but this is usually enough for you to see what changed.
This is just another reason to invest in Beyond Compare, as it is one of the most polished pieces of software I've ever used - and it's the best $30 (less if you buy several licenses) I've spent on software.
There are many tools for Word document comparison. I currently use a Python script that puts a command-line front end on the built-in compare and merge functionality of Word.
http://nicolas.lehuen.com/index.php/post/2005/06/30/60-comparing-microsoft-word-documents-stored-in-a-subversion-repository
It should be easy to automate Word to extract all text from a Word document into a text file. So you could write a script that creates text files from Word docs, and then grep, compare, version-control and review those text files (a sketch of this follows below).
Of course this is not an ideal solution, since you lose your pretty formatting, but it should work.
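As a sketch of such a script for .docx files (assuming the third-party python-docx package; older binary .doc files would still need Word automation or a converter such as antiword):

import pathlib
from docx import Document          # pip install python-docx

for path in pathlib.Path("docs").glob("*.docx"):     # "docs" is a placeholder folder
    doc = Document(str(path))
    text = "\n".join(p.text for p in doc.paragraphs)
    path.with_suffix(".txt").write_text(text, encoding="utf-8")
    print("wrote", path.with_suffix(".txt"))

The resulting .txt files can then be grepped, diffed and checked into version control alongside the originals.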
I think there are programs that convert Word docs to plain text. Use one of them to convert the Word doc to plain text and then use diff, grep, etc.
Also have a look into recommended toolchain(s) for DocBook.