Why does Doxygen still execute pdfTeX despite `GENERATE_LATEX=NO`?

I have a reasonably large C/C++ project and want to generate only an HTML version of the documentation with Doxygen. In my Doxyfile I've written
GENERATE_LATEX = NO
Indeed, there is no latex directory in the specified output directory, just html. However, I'm getting output from pdfTeX on stderr:
...
This is pdfTeX, Version 3.1415926-1.40.11 (TeX Live 2010)
restricted \write18 enabled.
entering extended mode
(./_formulas.tex
LaTeX2e <2009/09/24>
...
Output written on _formulas.dvi (279 pages, 49376 bytes).
Transcript written on _formulas.log.
...
Why?
Doxygen's installation instructions list LaTeX as an optional (additional) tool, which is not required. Thus I assume it is not required for the basic functionality of HTML generation.
How can I make Doxygen not execute pdfTeX? (No, I do not want to uninstall *TeX on my machine.)

The file _formulas.tex is created as the result of using formulas in your documentation: even with GENERATE_LATEX set to NO, Doxygen by default runs LaTeX to render formulas as bitmap images for the HTML output.
If you want formulas without the need for LaTeX, you can set USE_MATHJAX to YES.
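A minimal Doxyfile fragment for that setup (these are standard Doxygen options; the MATHJAX_RELPATH line is optional and the URL shown is just one possible MathJax location):

GENERATE_LATEX  = NO
# Render formulas in the browser with MathJax instead of having
# Doxygen invoke LaTeX to turn them into bitmap images.
USE_MATHJAX     = YES
# Optionally point Doxygen at a specific MathJax copy:
# MATHJAX_RELPATH = https://cdn.jsdelivr.net/npm/mathjax@2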

Related

In MkDocs, how to include script-generated content?

How can I include script-generated content into documentation built by mkdocs (or mike)?
For example, I'd like to present a table of the command-line arguments of a program. Currently the most complete listing (and descriptions) of those arguments, as supported by a given version of the program, is the output of the program itself when invoked with the --help flag. As with API documentation, if I were to transcribe it manually into a markdown file, there would be a maintenance burden to prevent drift between the docs and the codebase. Do I need to pre-orchestrate a script that generates entire markdown files, or is there a way to embed shell commands in the docs source, to be evaluated automatically when the docs are built?

how to include PDF 1.6 files in Sphinx

We use LibreOffice to generate PDF figures from .odg files (automatically via a makefile) and include these in documentation generated by Sphinx, which via LaTeX ultimately produces a PDF file. This works nicely. However, starting with LibreOffice 7, LibreOffice generates PDFs with version 1.6, and pdflatex as used by Sphinx (4.1.2) only accepts PDFs up to version 1.5, producing warning messages such as
PDF inclusion: found PDF version <1.6>, but at most version <1.5> allowed
That would easily be fixable by including \pdfminorversion=6 early in the generated LaTeX file. However, putting it (first) in the preamble via conf.py is too late:
! pdfTeX error (setup): PDF version cannot be changed after data is written to the PDF file.
Are there any other legal means to insert raw LaTeX early (without resorting to scripted file manipulation)? Or do you have any other hints on how to specify the PDF version that gets produced by LaTeX/Sphinx and thus get rid of the warnings? I know, they're just warnings, but these things tend to become errors sooner than one might think...
First of all, some of the answers to this question might be useful if you definitely want to upgrade to PDF version 1.6.
On the other hand, if the actual PDF version of your figures is not an issue, you can force the PDFs you insert to be version 1.5, using Ghostscript with the following command taken from here:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.5 -o output.pdf input.pdf
That way you avoid introducing any instability by messing with the LaTeX configuration (even though setting \pdfminorversion=6 should be fine by now).
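Since the figures are already generated automatically via a makefile, the downgrade can be chained into the same rule. A minimal sketch, assuming a pattern rule along these lines (the actual rule and paths in your makefile will differ):

# Convert each .odg drawing to PDF, then rewrite it as PDF 1.5
# so that pdflatex accepts it without warnings.
%.pdf: %.odg
	libreoffice --headless --convert-to pdf --outdir $(@D) $<
	gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.5 -o $@.tmp $@
	mv $@.tmp $@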

Changing the output language of gpg2.exe

I am automating a process and I use gpg2.exe for it.
Because I need to parse console output, potentially from different systems, I need to set the language to a controlled value.
I am following the instructions from the manual, which state:
LANGUAGE
Apart from its use by GNU, it is used in the W32 version to override the language selection done through the Registry. If used and set to a valid and available language name (langid), the file with the translation is loaded from gpgdir/gnupg.nls/langid.mo. Here gpgdir is the directory out of which the gpg binary has been loaded. If it can’t be loaded the Registry is tried and as last resort the native Windows locale system is used.
I found a thread from 2011 that goes into a bit more detail regarding this problem, but it may actually concern a different version.
I created a batch file for manual testing.
@echo off
REM C is meant to display untranslated messages according to one internet source
set LANGUAGE="C"
call "C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" --version
pause
I expect the output to be English, but it is still German.
The manual says something about there being a "gnupg.nls" folder somewhere.
I was not able to locate this folder, which makes me wonder where German is loaded from.
Is there an error in the man page?
The PDF version of the man page shows the same content as the man page that came with the installation.
Can someone shed some light on this?
I had the same problem: the output was in Swedish though I wanted it in English. The Windows display language was set to English, and I also tried setting environment variables, but what solved it for me was to remove the Swedish translation for gnupg, found here:
C:\Program Files (x86)\gnupg\share\locale
After removing the "sv" directory, all output was in English.
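If you prefer renaming over deleting (so the translation stays recoverable), something like this from an elevated command prompt should do (path as above):

ren "C:\Program Files (x86)\gnupg\share\locale\sv" sv.disabled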
The language directory can be pulled from the registry, or I suppose it can also have a fixed path, since I cannot find the information in my registry.
On my test system the path is 'C:\Program Files (x86)\GNU\GnuPG\share\locale'.
This path contains folders for each language - not all of them contain translation files for gpg2 as far as I can tell.
The environment variable for the language is not LANGUAGE but LANG. Setting it to C causes gpg2 to default to English.
I successfully tested the following call:
@echo off
set LANG=C
call "C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" --version
User bignose is still correct when he states that I should use the API instead, but within my current restrictions I do not see a straightforward way to do so.
This isn't a programming question, as noted in the votes to close.
To answer a different question that could be asked from a programming perspective: don't parse inherently changeable console output when there is a well-defined library API for the same functionality.
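To illustrate the point, a small sketch using the third-party python-gnupg wrapper (one of several available bindings, picked here only as an example; the gpgbinary path is an assumption, not the asker's actual install location):

import gnupg  # third-party package "python-gnupg"

# Query gpg through a library API instead of parsing localized console text.
gpg = gnupg.GPG(gpgbinary=r"C:\Program Files (x86)\GNU\GnuPG\gpg2.exe")
for key in gpg.list_keys():
    print(key["keyid"], key["uids"])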

doxygen not producing output from input filter (doxyqml)

I'm trying to use doxyqml to produce QML documentation through doxygen, but the documentation pages are not being created.
As per the doxyqml documentation I've added a *.qml entry to FILE_PATTERNS, and added *.qml=doxyqml to FILTER_PATTERNS (doxyqml is available from /usr/bin so just calling doxyqml on the command line is sufficient to launch it).
From the doxygen output I can see that the *.qml file pattern is working as the files appear in the 'Reading' stage of the output - but not the parsing stage. If I add a #define or some other non-QML statement to the files then a doxyqml error appears in the doxygen output, so I know that doxyqml is being called correctly.
I also know that the doxyqml output is correct because if I copy the output from calling doxyqml directly with one of the qml files, and paste it into a *.h file, doxygen builds the documentation for it.
It's almost as if doxygen is just not reading the output from doxyqml. Has anyone else had this experience? I'm using doxygen 1.8.8 and the latest doxyqml codebase (7th July 2014).
It seems to be because Doxygen uses the file extension to work out which parser to use to analyse the text, and because *.qml was unknown to it, it was guessing wrong (though I don't know which parser it was attempting to use).
The solution was to tell Doxygen which parser to use for QML files, so I just needed to add qml=c++ to EXTENSION_MAPPING, and then everything worked as expected.
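For reference, the complete Doxyfile fragment discussed in this question (assuming doxyqml is on the PATH):

# Feed .qml files to Doxygen, run them through the doxyqml filter,
# and parse the filter's output with the C++ parser.
FILE_PATTERNS     += *.qml
FILTER_PATTERNS    = *.qml=doxyqml
EXTENSION_MAPPING  = qml=c++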

.h generated from .h.in?

There are struct definitions in the .h file that my library creates when I build it, but I cannot find them in the corresponding .h.in. Can somebody tell me how all this works and where the extra info comes from?
To be specific: I am building pth, the userspace threading library. It has pth_p.h.in, which doesn't contain the struct definition I am looking for, yet when I build the library, a pth_p.h appears and it has the definition I need.
In fact, I have searched every single file in the library before it is built and cannot find where it is generating the struct definition.
Pth uses GNU Autoconf, Automake, and Libtool. ./configure is a shell script (generated ahead of time from configure.ac by Autoconf's m4 macros) that detects a whole bunch of different system attributes and rewrites a number of files accordingly.
It looks like it boils down to ./configure generating Makefile from Makefile.in, and then make running something that triggers the shtool subcommand scpp:
pth_p.h: $(S)pth_p.h.in
$(SHTOOL) scpp -o pth_p.h -t $(S)pth_p.h.in -Dcpp -Cintern -M '==#==' $(HSRCS)
It's an obscure link, but here's a shtool-scpp manpage, which describes it as:
This command is an additional ANSI C source file pre-processor for sharing cpp(1) code segments, internal variables and internal functions. The intention for this comes from writing libraries in ANSI C. Here a common shared internal header file is usually used for sharing information between the library source files.
The operation is to parse special constructs in files, generate a few things out of these constructs and insert them at position mark in tfile by writing the output to ofile. Additionally the files are never touched or modified. Instead the constructs are removed later by the cpp(1) phase of the build process. The only prerequisite is that every file has an `#include "ofile"` at the top.
The .h.in file is probably processed by the configure script (generated from configure.ac); look for
AC_CONFIG_FILES([thatfile.h])
It replaces variables of the form @VAR@ in the .in file with their values.
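As a hypothetical illustration of the mechanism (file and variable names invented for the example), a configure.ac containing

AC_INIT([mylib], [1.0])
AC_SUBST([MYLIB_VERSION], [1.0])
AC_CONFIG_FILES([mylib.h])
AC_OUTPUT

turns a mylib.h.in line such as

#define MYLIB_VERSION "@MYLIB_VERSION@"

into the following line in the generated mylib.h when ./configure runs:

#define MYLIB_VERSION "1.0"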
Edit: I just noticed that, if I'm right, you should retag your question.