How to create a PDF out of the Sphinx documentation tool

Followed this link to try and generate pdf from Sphinx:
https://www.quora.com/How-to-create-a-PDF-out-of-Sphinx-documentation-tool
$ sphinx-build -b pdf source build/pdf
Error: Cannot find source directory `/Users/seb/mydocs/source'.
$ make all-pdf
make: *** No rule to make target `all-pdf'. Stop.
$ make pdf
make: *** No rule to make target `pdf'. Stop.
I then tried, on OS X:
$ conda install -c dfroger rst2pdf=0.93
Fetching package metadata .........
Solving package specifications: .
Error: Package missing in current osx-64 channels:
- rst2pdf 0.93*
You can search for packages on anaconda.org with
anaconda search -t conda rst2pdf
EDIT:
After pip install rst2pdf, I registered rst2pdf in my conf.py Sphinx config:
extensions = ['sphinx.ext.autodoc','rst2pdf.pdfbuilder']
But adding 'rst2pdf.pdfbuilder' causes
Extension error:
Config value 'math_number_all' already present
make: *** [html] Error 1
$ sphinx-build -bpdf sourcedir outdir
But what do I specify as sourcedir and outdir? Example please.
EDIT:
Now after make html
and then:
$ rst2pdf index.rst output.pdf
index.rst:14: (ERROR/3) Unknown directive type "toctree".
.. toctree::
:maxdepth: 2
introduction
tutorial
multiple_jobs
deployment
project
index.rst:26: (ERROR/3) Unknown interpreted text role "ref".
index.rst:27: (ERROR/3) Unknown interpreted text role "ref".
index.rst:28: (ERROR/3) Unknown interpreted text role "ref".
Also:
$ rst2pdf.py index.rst -o mydocument.pdf
This does produce a mydocument.pdf, but it is completely different from the HTML output, and the TOC links to all the pages are not even there.
[Image: the same page rendered as PDF versus HTML]

This is from the official Sphinx documentation. If you have the pdfTeX tool installed on your machine, all you need is:
$ make latexpdf
Then, the generated pdf file(s) will be under _build/latex/<PROJECT-NAME>.pdf
So, the complete process from scratch would be as follows:
$ pip install -U sphinx # install the package
$ sphinx-quickstart # create a new project (answer the questions)
$ make latexpdf # compile and generate pdf file
Note that you may also optionally install whatever extensions you need by editing the file conf.py.
NOTE: This answer assumes a LaTeX engine is installed on your machine.
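If your project was generated without a Makefile, the two underlying steps can also be run directly; a minimal sketch, assuming your sources (the directory containing conf.py) live in docs/:
$ sphinx-build -b latex docs docs/_build/latex   # generate the .tex sources plus a Makefile
$ make -C docs/_build/latex all-pdf              # run LaTeX on them to produce the PDF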

I have succeeded in generating a PDF file for the DevStack document by following the configuration changes below.
Here are the steps:
Edit your conf.py (edit or append values)
extensions = ['rst2pdf.pdfbuilder']
pdf_documents = [('index', u'rst2pdf', u'Sample rst2pdf doc', u'Your Name'),]
Install the "rst2pdf" if necessary
pip install rst2pdf
Build the PDF file like this:
sphinx-build -b pdf doc/source doc/build
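Putting it together, a minimal end-to-end sketch (doc/source and doc/build are the example paths from this answer; the output file name comes from the second field of pdf_documents):
pip install rst2pdf
sphinx-build -b pdf doc/source doc/build
ls doc/build/rst2pdf.pdf   # the generated PDF, named after the pdf_documents entry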

Succeeded in PDF generation via LaTeX (for Windows 10). There is no need to change the existing conf.py file in Sphinx. The best solution is to install MiKTeX and Perl (ActiveState). Then, when running Sphinx, type make html, then make latex, and then make latexpdf. This solved my issue.

You could avoid rst2pdf and use make latexpdf to build PDF output via a LaTeX file.
cf more info:
https://docs.typo3.org/typo3cms/extensions/sphinx/AdvancedUsersManual/RenderingPdf/
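With a Makefile generated by sphinx-quickstart, that boils down to the standard Sphinx targets:
make latex      # write the .tex sources to _build/latex
make latexpdf   # write the sources and run LaTeX on them to produce the PDF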

Make sure to have a look at http://rst2pdf.ralsina.me/handbook.html#sphinx.
The following applies to Ubuntu 16.
But it is probably less painful to install a full LaTeX suite than to try to get this tool running; it is very sensitive to errors and difficult to use.
I took a look there (https://tex.stackexchange.com/a/95373) and it looks daunting...
This is more encouraging: https://milq.github.io/install-latex-ubuntu-debian/
And I did it:
sudo apt-get install texlive-full
make clean latexpdf
It is worth your patience (and disk space) to install. I got rid of rst2pdf.
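With the default Makefile layout, the generated file then shows up under _build/latex; a quick way to confirm:
ls _build/latex/*.pdf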

Found this related post, which discovers that rst2pdf breaks math rendering in Sphinx 1.4.1 because it imports a dummy math module.
In rst2pdf.pdfbuilder:setup(), it calls mathbase.setup() internally to install a dummy math module. This causes conflicts and raises errors.
I had this same error when running sphinx-build -b pdf
Extension error:
Config value 'math_number_all' already present
Now I am looking at the LaTeX option and exporting to PDF as @tfv posted. pdflatex is part of the larger universe of a TeX distribution.

As an alternative to latexpdf: rinohtype
Once rinohtype is installed, run: sphinx-build -b rinoh . _build
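A minimal sketch of the whole sequence, assuming it is run from the directory containing conf.py (the output path is an example):
pip install rinohtype
sphinx-build -b rinoh . _build/rinoh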

Create a file documentation.md and open it in the Typora software. Navigate to the Sphinx documentation that you need, select it with the mouse, and copy it into Typora. Then, inside Typora, export as PDF. Done.

SOLVED - Ubuntu 18.04 setting up virtualenvwrapper, python 3.8

Original question:
I am installing virtualenvwrapper on Ubuntu 18.04. Here is what I have tried so far:
From https://virtualenvwrapper.readthedocs.io/en/latest/:
$ pip install virtualenvwrapper
...
$ export WORKON_HOME=~/Envs
$ mkdir -p $WORKON_HOME
$ source /usr/local/bin/virtualenvwrapper.sh
Error: bash: /usr/local/bin/virtualenvwrapper.sh: No such file or directory
Ok, so I went looking for virtualenvwrapper.sh. Eventually I found it:
joanna@joanna-X441BA:~/.local/bin$ ls
...
virtualenv-clone
virtualenvwrapper_lazy.sh
virtualenvwrapper.sh
...
Tried again with the new path: joanna@joanna-X441BA:/$ source /home/.local/bin/virtualenvwrapper.sh
bash: /home/.local/bin/virtualenvwrapper.sh: No such file or directory
Agh, ok. Searched StackOverflow, followed the instructions in Issue installing Virtualenvwrapper on Ubuntu 18.04?:
export WORKON_HOME=$HOME/.virtualenv
export VIRTUALENVWRAPPER_LOG_DIR="$WORKON_HOME"
export VIRTUALENVWRAPPER_HOOK_DIR="$WORKON_HOME"
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
I checked that this is in fact where my python3 installation lives. Same result though - No such file or directory.
Also tried sudo apt-get update and it successfully updated a bunch of stuff. But still No such file or directory.
Following this article, I also tried
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
(with the paths corrected for my filesystem setup.)
I don't understand why Bash is saying "No such file or directory" when it is in fact there (inexact error message that could be improved?), or more importantly how to fix this problem.
Note: This article warns me not to use sudo with pip because "If you think you need to use sudo, you're probably trying to modify a distribution-owned file", which is apparently Very Bad. I have also seen several articles warning that I should really use python -m pip install (or python3 -m pip install?) instead of plain pip install, because plain pip install can cause unintended side effects. I am not an expert by any means in these matters, but avoiding side effects sounds good to me.
Solution:
I finally got it to work! I kept playing around with it, double-checking that all the paths were correct and that I did everything in the right order. I also had a stray installation of Python 3.8 that I had apparently installed in one of the folders I use for code (my understanding is that Python installations should normally go into a system directory like /usr/bin). On advice from my mentor, I used the file manager to delete the stray Python 3.8, which may have helped with the virtualenvwrapper problem. After deleting it, I ran these commands and it finally worked!
export WORKON_HOME=$HOME/.virtualenv
export VIRTUALENVWRAPPER_LOG_DIR="$WORKON_HOME"
export VIRTUALENVWRAPPER_HOOK_DIR="$WORKON_HOME"
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source ~/.local/bin/virtualenvwrapper.sh
My advice to readers of this question who are stumped with similar problems: triple-check the paths you are using by cd-ing into the folders, and also check for spelling errors in what you are typing. Your computer's file system may be set up slightly differently from mine, so don't blindly copy-paste the file paths I used. This will save you a lot of time and frustration. Also note that the location of ~ is NOT the same as /home; that's one mistake I made.
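For readers who want the working setup to survive new terminal sessions, the same lines can be appended to ~/.bashrc; a sketch using the paths from the solution above (adjust them to your own system):
echo 'export WORKON_HOME=$HOME/.virtualenv' >> ~/.bashrc
echo 'export VIRTUALENVWRAPPER_LOG_DIR="$WORKON_HOME"' >> ~/.bashrc
echo 'export VIRTUALENVWRAPPER_HOOK_DIR="$WORKON_HOME"' >> ~/.bashrc
echo 'export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3' >> ~/.bashrc
echo 'source $HOME/.local/bin/virtualenvwrapper.sh' >> ~/.bashrc
source ~/.bashrc   # pick up the changes in the current shell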

Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

Is it possible to generate a .hex file with MicroPython and my own python program code at a Linux command line, rather than in one of the editors?
Looking at the tag in your question, it looks like you want to use MicroPython on the BBC micro:bit, correct?
If that's the case then you can use this Python command-line tool: https://github.com/ntoll/uflash/
Instructions on how to install it and use it can be found in the README at that link.
This works with Python 2 and 3, and your Linux distribution is very likely to have at least one Python version available out-of-the-box.
If you have pip installed you can easily install it with: pip install uflash
But you can also download the source code, using git or downloading a zip file from GitHub (https://github.com/ntoll/uflash/archive/master.zip), and run it without installing anything. In this case you can execute the uFlash script with Python:
python uflash.py path_to_your_code.py
And the current version of uFlash includes the latest version of MicroPython for the micro:bit.
You can write the micropython code for the microbit in any text editor, such as vscode or vim. Save it as a .py file.
To create the .hex file, use the py2hex tool, which is installed along with uflash when you install uflash using the command:
pip install uflash
To create a .hex file for a microbit micropython file called hello.py:
py2hex hello.py
This creates a file called hello.hex. This can be dragged and dropped onto your connected microbit through the file explorer. I use Nautilus and the microbit appears as 'MICROBIT'.
You can automate the creation and loading of the .hex file to the microbit using uflash, e.g.
uflash hello.py
This will create the .hex file and then load it onto an attached microbit. The .hex file will not be left on your file system, though. Note that the microbit has a habit of no longer being attached to the file system after loading a .hex file, and it needs to be re-attached between builds.
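A typical manual round-trip, reusing this answer's hello.py example and the usual Ubuntu mount point (the MICROBIT path may differ on your system):
py2hex hello.py                       # leaves hello.hex on disk
cp hello.hex /media/$USER/MICROBIT/   # flash by copying onto the mounted board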
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to setup the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:Bit foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd#https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
sudo chmod -R +666 .
# Build one example.
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
What uflash does is ship its own precompiled firmware.hex, which is the part that requires the toolchain; it then just uses that to build the combined hex in Python.
The cool thing is that now that we have the toolchain, we can also create examples directly in C/C++/assembly: How to compile C/C++ code into a .hex file for the BBC micro:bit? These can likely run much faster.
Previous failed attempts at setting it up myself
The Yotta package manager used by the BBC micro:bit bit-rotted almost immediately after it was discontinued, making pip install yotta approaches like https://flames-of-code.netlify.app/blog/microbit-cpp-1/ very difficult.
The GCC gcc-arm-embedded toolchain PPA ppa:team-gcc-arm-embedded/ppa has also been discontinued: https://askubuntu.com/questions/1243252/how-to-install-arm-none-eabi-gdb-on-ubuntu-20-04-lts-focal-fossa and now you would have to download it from an arm.com website.
Atencio's Docker setup explains how to do it, though: https://github.com/carlosperate/docker-microbit-toolchain/blob/master/Dockerfile . The key is likely his carefully crafted requirements.txt, likely kept back from the days when things really worked, to avoid the infinitely many dependency issues of yotta. The image is based on Ubuntu 20.04.

pandoc: xelatex not found. xelatex is needed for pdf output

I have just upgraded my Macbook Pro OS to El Capitan (v10.11.4).
My attempt to export a Markdown file (created using Sublime Text 2, v2.0.2, build 2221) to pdf using pandoc is now failing, and I receive the following error:
pandoc: xelatex not found. xelatex is needed for pdf output
My output command is as follows:
pandoc doc1.md -o doc1.pdf --toc -V geometry:margin=1in --variable fontsize=10pt --variable fontfamily=utopia --variable linkcolor=blue --latex-engine=xelatex -f markdown-implicit_figures -s
Above command worked like a charm prior to installing El Capitan.
FYI - in searching for questions here I have not found one that gives a suitable answer.
In my case, adding one line to ~/.bashrc solved the error:
export PATH=/Library/TeX/texbin:$PATH
Of course, the environment variable has to be activated in the current terminal:
$ . ~/.bashrc
Then run $ make again, and the error disappears.
El Capitan's security features disable and remove the old symlink /usr/texbin. If you have MacTeX 2015, the binaries should have been installed in /Library/TeX/texbin as well. You'll have to update the PATH you're using to launch pandoc to include that folder. If you have a pre-2015 distribution of MacTeX, there are instructions here.
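Whichever PATH fix applies to your setup, a quick sanity check that pandoc will now be able to find the engine:
which xelatex       # with MacTeX 2015+ this should print /Library/TeX/texbin/xelatex
xelatex --version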
Linux Ubuntu instructions:
Tested on Ubuntu 18.04:
If you see this error on Linux Ubuntu:
pandoc: xelatex not found. xelatex is needed for pdf output
Then you need to install the texlive-xetex package like this:
sudo apt update
sudo apt install texlive-xetex
That solves it! Source where I learned this: TEX: XeLatex under Ubuntu.
In my particular case, I was trying to run this make_book.sh script to generate book.pdf, so I needed to do all of the following:
sudo apt update
sudo apt install pandoc
pip3 install MarkdownPP
sudo apt install texlive-xetex
cd path/to/repo
cd systemd-by-example
./make_book.sh
# You'll now have "book.pdf" inside directory "systemd-by-example"!
References:
https://github.com/jreese/markdown-pp - instructions to install MarkdownPP
https://tex.stackexchange.com/a/179811/168682 - instructions to install texlive-xetex

rpmbuild: Installed (but unpackaged) file(s) found - Multiple options tried

This is getting rather maddening - I'm trying to build an RPM out of some BASH scripts which work as Nagios plugins. I keep getting:
error: Installed (but unpackaged) file(s) found:
/usr/lib64/nagios/plugins/netappassigncheck
/usr/lib64/nagios/plugins/netappassignprep
In the %files directive of my spec file I have tried most of the combos that have been suggested here and on various other internet forums:
/usr/lib/nagios/plugins/*
/usr/lib/nagios/plugins/netappassigncheck
/usr/lib/nagios/plugins/netappassignprep
%dir /usr/lib/nagios/plugins/
And currently I am on
%dir %{_libdir}/nagios/plugins/
This is why my most recent error output shows lib64; previous errors, when I quoted the full path, showed /usr/lib/...
These are the only 2 files that should make up the package as well.
Here is my .spec file
Name: netappautoassign
Summary: A set of Nagios Plugins for automatically assigning disks to a Netapp
Version: 1.0
Release: 1
License: %{license}
Group: Applications/System
Source: %{source}
URL: Reserved
Vendor: %{vendor}
Packager: %{packager}
BuildArch: noarch
Requires: bash, grep, util-linux, coreutils, expect, openssh-clients, bc, sed
Provides: netappassignprep, netappassigncheck
%description
Since Netapp's autoassign function may lead to disks being assigned to the
wrong head these NAGIOS plugins will ensure disks are added to the correct
head when replaced.
%prep
%setup -q
%build
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_libdir}/nagios/plugins
cp netappassigncheck %{buildroot}%{_libdir}/nagios/plugins/
cp netappassignprep %{buildroot}%{_libdir}/nagios/plugins/
%files
%defattr(755,root,root,755)
%dir %{_libdir}/nagios/plugins/
%clean
rm -rf %{buildroot}
%post
And here's my ~/.rpmmacros
%_topdir %(echo $HOME)/rpmbuild
%_tmppath %{_topdir}/tmp
%buildroot %{_tmppath}/%{name}-%{version}
%license RESERVED
%source %{name}-%{version}.tar.gz
%vendor REDACTED
%packager REDACTED
EDIT - SOLVED
I'm not sure if this is a bug or desired behaviour, but it would appear that during the build section the %{buildroot} variable was not being read in from .rpmmacros. Having moved this variable into the main spec file, the RPM is now built.
I'm not sure if this is a bug or desired behaviour, but it would appear that during the file verification section, it was reading in all the current active plugins under the root file system and not the %{buildroot}.
I suspected that the %{buildroot} variable was not being read in from .rpmmacros at this stage, although it was for all other stages.
I moved the declaration of %{buildroot} into my main .spec file and the build is now working!
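If you would rather not depend on .rpmmacros for this at all, the build root can also be forced on the rpmbuild command line; a sketch, expanding the paths from the macros in this question (the SPECS/ location assumes the standard rpmbuild tree):
rpmbuild -ba --buildroot "$HOME/rpmbuild/tmp/netappautoassign-1.0" SPECS/netappautoassign.spec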

Autoconf macros for Apache and conf.d install process?

I have a package that is using the autotools to build and install.
Part of the package is a website that can be run on the local machine.
So in the package there is a .conf file that is meant to be either
copied or linked to the /etc/apache2/conf.d directory. What's the
standard way that packages would do this? If possible, I'd like for
the user not to have an extra step to make the website work. I'd like
to have them install the package and then be able to browse to
http://localhost/newpackage to get up and running.
Also, is there a way that autoconf knows about the Apache install, or a
standard way through the environment somehow? If someone could
point me in the right direction, that would be great.
Steve
The first thing you should do is locate the Apache extension tool, apxs or apxs2 (which one depends on the Apache version and/or the platform you are building for). Once you know where the tool is located, you can run queries to get certain Apache config parameters. For example, to get the system config dir you can run:
apxs2 -q SYSCONFDIR
Here is a snippet of how you can locate apache extension tool: (be careful it may contain syntax errors)
dnl Note: AC_DEFUN goes here plus other stuff
AC_MSG_CHECKING(for apache APXS)
AC_ARG_WITH(apxs,
  [AS_HELP_STRING([[--with-apxs[=FILE]]],
    [path to the apxs, defaults to "apxs".])],
  [
    if test "$withval" = "yes"; then
      APXS=apxs
    else
      APXS="$withval"
    fi
  ])
if test -z "$APXS"; then
  for i in /usr/sbin /usr/local/apache/bin /usr/bin ; do
    if test -f "$i/apxs2"; then
      APXS="$i/apxs2"
      break
    fi
    if test -f "$i/apxs"; then
      APXS="$i/apxs"
      break
    fi
  done
fi
AC_SUBST(APXS)
The way to use APXS in your automake Makefile.am would look something like this:
## Find apache sys config dir
APACHE2_SYSCONFDIR = `@APXS@ -q SYSCONFDIR`
## Misc automake stuff goes here
install: install-am
cp my.conf $(DESTDIR)${APACHE2_SYSCONFDIR}/conf.d/my.conf
I assume you are familiar with the automake and autoconf tools.
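For completeness, once a macro like the one above is wired into configure.ac, a user would typically drive the build like this (a sketch; the apxs2 path is just an example):
./configure --with-apxs=/usr/bin/apxs2
make
sudo make install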