Reuse texinfo in doxygen [closed]

Are there any tools for converting texinfo files into something Doxygen can process (presumably markdown)? I have a bunch of old texinfo files that I'd like to link in with the doxygen docs we have. I guess I'll generate html from the texinfo and link to that from doxygen source files if I have to, but I'd rather integrate the texinfo docs into the doxygen ones.

I have been struggling with this on and off for legacy documentation on my project. I've found a solution that, while not completely automated, is workable. Since Doxygen can process markdown files, I've been converting my texi files into markdown.
Convert the texi file to html via texi2html
$ texi2html foo.texi
Convert the html file to markdown via pandoc
$ pandoc -f html -t markdown -o foo.md foo.html
Clean up the resulting markdown file with your markdown editor of choice. There are plenty of them; on OS X I use Markdown Pro.
Edit your Doxyfile and tell Doxygen to process the markdown files in the directory. Either add the file to your INPUT list, or add the .md extension to the FILE_PATTERNS tag. More information about Doxygen's markdown support may be found here:
http://www.doxygen.nl/manual/markdown.html
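To batch the first two steps over a directory of files, a small shell loop works; this is only a sketch, assuming texi2html's default output name is the input name with an .html extension:

for f in *.texi; do
    texi2html "$f"                                       # produces foo.html from foo.texi
    pandoc -f html -t markdown -o "${f%.texi}.md" "${f%.texi}.html"
done

And a minimal Doxyfile fragment for the last step, with example paths (the docs/ directory is hypothetical, not from the answer):

INPUT          += docs
FILE_PATTERNS  += *.md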

Related

Change options of already used latex package with doxygen

I'm using cmake with doxygen to generate documentation based on some markdown files. There are some special characters in those markdown files that I can't control (they are added by users; I just run my script). I found a solution by using the option utf8x instead of just utf8 with the inputenc package, and using the ucs package as well.
So, I added the following line into CMakeLists.txt file:
set(DOXYGEN_EXTRA_PACKAGES ucs "[utf8x]{inputenc}")
This leads to the following lines in the refman.tex file:
% Packages requested by user
\usepackage{ucs}
\usepackage[utf8x]{inputenc}
The problem is, there is already a \usepackage[utf8]{inputenc} in the file, which leads to an error when generating the PDF file using make pdf.
I can solve that by having a special bash/python script that replaces \usepackage[utf8]{inputenc} with whatever I want, but it seems strange to me that there is no way to make doxygen replace or remove already-used packages, given that EXTRA_PACKAGES already exists for adding extra packages.
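For what it's worth, the replacement script described above can be a single sed call run between doxygen and make pdf; a minimal sketch, assuming the default latex/ output directory and GNU sed (it deletes the stock utf8 line so only the utf8x one added via EXTRA_PACKAGES remains):

# remove the pre-existing \usepackage[utf8]{inputenc} from the generated file
sed -i '/\\usepackage\[utf8\]{inputenc}/d' latex/refman.tex

In CMake this could be wired up as an add_custom_command step between the Doxygen run and the PDF build, though that wiring is left out here.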

Angular node_modules: how to clean up? [closed]

Having an Angular project, I have a node_modules directory in my project dir.
It is pretty full with all the files of the modules I use.
I like to periodically save the project folder as a backup. Doing this takes a while because of node_modules.
Is it a bad idea to remove node_modules before the backup and then, after doing more coding, rebuild it with
npm install
?
Or maybe there's a better way to get smaller backups?
EDIT
I use git and I also do this directory backup. My question is about the directory backup only.
Your package.json file acts as a blueprint for your required node modules, recording the version of every module used in the project. Keeping a backup of node_modules therefore doesn't make sense, as you can get it back at any time with an npm install in your project.
If you are using Git, you can ignore node_modules by adding the following in .gitignore file
# dependencies
/node_modules
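If you do keep the directory backup, another option is to exclude node_modules at archive time rather than deleting it first; a minimal sketch assuming a tar-based backup (archive and folder names are examples):

# archive the project without node_modules; npm install restores it later
tar --exclude='node_modules' -czf project-backup.tar.gz project/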
node_modules and package-lock.json should not be backed up; they can be regenerated when needed, as all the data is present in package.json.
Please use a version control system such as git instead of manual backups.
Check this link for a better understanding:
https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control

.gitignore node_modules/ not working for fd and ripgrep [closed]

I'm trying to set up fzf.vim for Vim on Windows 10.
You can use an alternative find command like ripgrep or fd, which is supposed to respect .gitignore.
My .gitignore file has this line, which is working fine for git commits, etc.:
node_modules/
My dir structure is:
/working-directory
    .gitignore (file)
    .git (dir)
    /node_modules (dir)
When I run
fd --type f
or
rg --files
it lists all the files in node_modules.
I feel like this may be a Windows problem.
How can I get these programs to use .gitignore to ignore node_modules?
Turns out, I was using a project where the git repo had not yet been initialized. So I had a .gitignore, but did not have a .git folder. By default, ripgrep needs an actual git repository in order to use the .gitignore file. My solution was to use the following flag:
--no-require-git
More info here
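For example, with the flag in place the listing respects .gitignore even though no repository has been initialized:

rg --files --no-require-git

(fd gained a similar option in later releases, so check fd --help for your build.)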

Latest (master/snapshot) Spark documentation - either online or run locally [closed]

Is there a location where the latest Spark documentation has been built and is available?
For example, the 1.3.0 release candidate branch was cut five days ago, but it is not available from the Apache site - the newest is the already-in-production 1.2.0.
Even better would be the output of an AMPLab Jenkins build. But maybe someone just publishes it regularly at a publicly accessible location?
Alternatively, what is the procedure to generate html from the Spark markdown sources? I can easily put up a local webserver to serve them.
Nightly publication of documentation snapshots and build artifacts is on the Spark roadmap; see https://issues.apache.org/jira/browse/SPARK-1517.
For the SECOND part of my question - how to generate the docs locally - the docs/README.md does have instructions.
[Screenshots: the build output, and the docs served at localhost:4000 showing Spark version 1.3.0, which is not released yet.]
The instructions are copied here:
The markdown code can be compiled to HTML using the Jekyll
tool. Jekyll and a few dependencies must be
installed for this to work. We recommend installing via the Ruby Gem
dependency manager. Since the exact HTML output varies between
versions of Jekyll and its dependencies, we list specific versions
here in some cases:
$ sudo gem install jekyll
$ sudo gem install jekyll-redirect-from
Execute jekyll from the docs/ directory. Compiling the site with
Jekyll will create a directory called _site containing index.html as
well as the rest of the compiled files.
You can modify the default Jekyll build as follows:
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build
# Serve content locally on port 4000
$ jekyll serve --watch
# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
Pygments
We also use pygments (http://pygments.org) for syntax highlighting in
documentation markdown pages, so you will also need to install that
(it requires Python) by running sudo pip install Pygments.
To mark a block of code in your markdown to be syntax highlighted by
jekyll during the compile phase, use the following syntax:
{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}
Sphinx
We use Sphinx to generate Python API docs, so you will need to install
it by running sudo pip install sphinx.
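Condensing the quoted steps, a local preview might look like the following; this sequence is a sketch (paths are illustrative, and gem/pip invocations and sudo usage depend on your setup):

sudo gem install jekyll jekyll-redirect-from
sudo pip install Pygments sphinx
cd spark/docs
SKIP_API=1 jekyll serve --watch     # browse http://localhost:4000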

How to avoid automatically appended file extensions on directory links when converting to pdf using pandoc?

I'm writing company internal documentation in R markdown and compiling using knitr in Rstudio. I'm trying to add a link pointing to a directory as follows:
[testdir](file:////c:/test/)
(this is following the convention described here)
When I compile it to html, I get the following link.
testdir
and it works as expected in Internet Explorer. However, when I try to convert to PDF straight from RStudio, an unwanted .pdf extension is appended to the link. I tried dissecting the problem, and it seems this change is happening within pandoc. Here are the details.
When I convert it to latex using pandoc,
>pandoc -f markdown -t latex testing.md -o test.tex
the link in the latex output file looks as follows:
\href{file:///c:/test/}{testdir}
Everything is good so far. However, when I convert the LaTeX output to PDF with pandoc,
>pandoc -f latex -t latex -o test.pdf test.tex
a .pdf extension is appended to the link. Here is a copy/paste of the pdf link output:
/c:/test/.pdf
is there a way to avoid this unwanted appended extension?
Perhaps I'm asking too much of pandoc, but I thought it might be worth asking since RStudio is becoming such a useful IDE to write my dynamic documents.
As you said, the .tex file pandoc generates is fine. So the problem is actually with LaTeX, specifically with the hyperref package, which is used in pandoc's LaTeX template.
The problem, along with two possible solutions, was described here. To prevent hyperref from being smart and adding a file extension, try:
[testdir](file:///c:/test/.)
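With the trailing dot, the emitted \href should target the directory's "." entry, leaving hyperref nothing sensible to append an extension to; by analogy with the output shown above (not re-tested here), the .tex would contain:

\href{file:///c:/test/.}{testdir}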
Or use ConTeXt instead of LaTeX:
$ pandoc -t context -s testing.md -o test.tex && context test.tex