I am working with Ubuntu 12.04.1. I am learning to make a basic video player using the FFmpeg library in C. My manual pages don't show any entries for the headers/functions of the library. Can someone please show me a way to add the documentation to my manual pages?
It is much easier to search that way than searching a web page every time.
PS: I have tried to add the documentation to the man pages using the Synaptic package manager. I installed the ffmpeg-doc package, but it doesn't seem to work.
Thanks.
Does this solve your problem?
http://ffmpeg-users.933282.n4.nabble.com/Building-man-pages-td934441.html
The FFmpeg project uses Doxygen to create its documentation, and Doxygen can be configured to output man format.
Modify the file doc/Doxyfile as shown below to tell Doxygen you want man page output:
GENERATE_MAN = YES
MAN_LINKS = YES
The MAN_LINKS option is very important: if you omit it, you cannot look up the correct API call by name.
After you configure the ffmpeg project by invoking ./configure ..., use the apidoc target to create the man pages:
$ make apidoc
The man pages will be output to doc/doxy/man/man3; append this path to your man page search path:
$ export MANPATH=$MANPATH:`pwd`/doc/doxy/man
Then you can look up man pages for the FFmpeg library API:
$ man av_register_all
Note
Most of the man pages that Doxygen generates for the library API calls are just links to the man page of the corresponding source file.
After opening one with man, you may have to use the / key to search and jump to the documentation section you want.
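Putting the whole answer together as one shell sketch (the sed lines assume each key appears exactly once in the stock doc/Doxyfile):
# Enable man output in the Doxygen config
sed -i 's/^GENERATE_MAN .*/GENERATE_MAN = YES/' doc/Doxyfile
sed -i 's/^MAN_LINKS .*/MAN_LINKS = YES/' doc/Doxyfile
# Configure the tree, then build the API man pages
./configure
make apidoc
# Make the new pages visible to man, then test a lookup
export MANPATH="$MANPATH:`pwd`/doc/doxy/man"
man av_register_all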
Can you point me to a tutorial that shows how to link to a dynamic library?
I created a dynamic library. Now I've got no clue how to include it in my project.
What I tried is:
1. I copied the dylib and header folder into my project.
2. I gave the library search path as $(PROJECT_DIR).
3. I gave the header search path as $(PROJECT_DIR)/include.
Now it builds and links just fine. But when I run it, it gives me this error:
dyld: Library not loaded: /usr/local/lib/test_dynamic_lib.dylib
Now I read in the documentation that you have to install the library at that path. How do I do that?
Or you can manipulate runpaths, but I didn't get a clue what that means. I'm actually a beginner in Cocoa development.
Can you explain how to do that? Or point me to a tutorial? I couldn't find any.
I found the answer.
I wrote a build script on my target:
export DYLIB=myLibrary.dylib
mkdir -p "$TARGET_BUILD_DIR/$TARGET_NAME.bundle/Contents/Frameworks"
cp -f "$SRCROOT/$DYLIB" "$TARGET_BUILD_DIR/$TARGET_NAME.bundle/Contents/Frameworks"
install_name_tool -change "@executable_path/$DYLIB" "@loader_path/../Frameworks/$DYLIB" \
    "$TARGET_BUILD_DIR/$TARGET_NAME.bundle/Contents/MacOS/$PRODUCT_NAME"
And yes, thanks to The Paramagnetic Croissant for pointing me in the right direction.
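To check that the rewrite worked, you can dump the linked libraries with otool (this uses the same paths as the script above):
# List the dylibs the binary links against; the entry should now read
# @loader_path/../Frameworks/myLibrary.dylib
otool -L "$TARGET_BUILD_DIR/$TARGET_NAME.bundle/Contents/MacOS/$PRODUCT_NAME"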
I've created a Doxygen docset for an Objective-C/C (no C++) project by running:
doxygen
...and then, in the output dir:
make
This results in a file called MyProject.docset. The file installs fine in e.g. Dash.app; however, it doesn't give me any documentation in Xcode. I've tried the following:
Copy to ~/Library/Developer/Shared/Docsets
Copy to /Applications/Xcode.../Docsets
Right-click, open with Xcode
According to Apple, the following documentation systems are fully supported:
HeaderDoc
Doxygen
I've tried installing an Appledoc docset, and it works fine.
If it's the docset contents hierarchy you're talking about: Xcode 5 changed the metadata format for describing the nodes. Previous versions used a flat list of tokens.xml files, but with Xcode 5 the format is a folder structure with XML files as leaves. The result is a docset that should still be searchable, but that doesn't show any nodes when browsing in the Xcode docs sidebar. Dash still supports the old format, so the docset looks fine there. As for appledoc, this GitHub issue covers it.
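For what it's worth, the docset itself comes from a handful of Doxyfile options plus the make step (which wraps Apple's docsetutil). A minimal sketch, with placeholder identifiers:
GENERATE_DOCSET  = YES
DOCSET_FEEDNAME  = "MyProject Documentation"
DOCSET_BUNDLE_ID = com.example.MyProject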
I am collecting quite a lot of material in a GitHub wiki. I really like to use the wiki to cooperate with other people, and IMHO the platform is really nice. I like it!
So I would like to keep using the GH wiki to collect stuff, edit, save, etc., but I would also like to export the content in order to create a PDF file that we can call "a manual".
I would like to generate an updated version of the manual automatically whenever I want, just by running a couple of scripts; I cannot put too much effort into this.
I guess it is possible to export the content somehow and then use pandoc (http://johnmacfarlane.net/pandoc/) to create the PDF, maybe adding an index and a style file.
Another interesting idea could be to publish a website once a month, dumping the content directly from the wiki.
I guess other people have already done something like this, but I did not find anything.
Any idea?
But... the GitHub wiki of a GitHub repo is a git repo in itself (introduced in August 2010).
You can clone it, push to it or pull from it.
Each wiki is a Git repository, so you're able to push and pull them like anything else.
Each wiki respects the same permissions as the source repository.
Just add ".wiki" to any repository name in the URL, and you're ready to go.
Or, as noted by htafoya in the comments, replace the .git part of the URL (if present) by .wiki.
That makes the "export" part of your question really trivial.
From there, you will find tons of scripts for converting markdown pages into PDF (a minimal sketch follows this list):
a Gradle task
a makefile
a python script
...
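For instance, a minimal clone-and-convert sketch using pandoc (user and repo names are placeholders):
# Clone the git repo behind the wiki, then merge every page into one PDF
git clone https://github.com/<user>/<repo>.wiki.git wiki
cd wiki
pandoc -s --toc -o manual.pdf *.md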
I'm adding to this answer, in case it helps any new readers :) here's what I did:
I installed GitHub Desktop: https://desktop.github.com/
Then, on the wiki page in my repository, I clicked "Clone in Desktop"
This saved the wiki locally as a .md file (after following the steps on screen)
I then used http://www.markdowntopdf.com/ to convert it to pdf
(Note: I renamed the files to remove characters that wouldn't work in a pdf file name before uploading to the website)
The end result was really nice.
I found many of the solutions difficult to reproduce, get the right version of, understand, fix, etc. So instead, I'll present a patchwork Docker solution to effortlessly convert on Windows (using Git Bash), macOS, or Linux in 5 "easy" commands:
git clone {project_url}.wiki .
# Convert *.md to *.md.html using the actual github pipeline
docker run --rm -e DOCKER_USER_ID=`id -u` -e DOCKER_GROUP_ID=`id -g` \
  -v "`pwd`:/src" -v "`pwd`:/out" andyneff/github-markdown-preview
# Fix wiki-internal hyperlinks (lowercase them, append .md.html), since
# wkhtmltopdf is stricter than the github servers; external links are left alone
docker run --rm -v `pwd`:/src -w /src perl \
  perl -p -i -e 's|href="(?!https?://)([^"#]+)(#[^"]*)?"|href="\L\1\E.md.html\L\2\E"|g' \
  *.html
# Lowercase all filenames so that the hyperlinks match
docker run --rm -v `pwd`:/src -w /src python \
python -c 'import sys;import os; [os.rename(f, f.lower()) for f in sys.argv[1:]]' \
*.md.html
# Convert html to pdf using Qt WebKit
docker run -it --rm -e DOCKER_USER_ID=`id -u` -e DOCKER_GROUP_ID=`id -g` \
-v `pwd`:/work -w /work andyneff/wkhtmltopdf \
wkhtmltopdf --encoding utf-8 --minimum-font-size 14 \
--footer-left "[date]" --footer-right "[page] / [topage]" \
--footer-font-size 10 \
toc \
*.html document.pdf
The Perl step is the main part that may fail, and I don't have a better solution for it. Pandoc has a really good filter solution, but it doesn't use the GitHub pipeline.
Bugs:
Extra-wide code blocks will be rendered with a scroll bar and essentially cut off in the PDF. It would be best to make the code blocks not overflow, but as a workaround you can add --user-style-sheet user.css to the wkhtmltopdf command (before toc/cover) and add the following to your user.css:
.markdown-body .highlight pre,
.markdown-body pre {
    overflow: visible !important;
}
Some links in the final PDF are off by one page and some are not; I'm not sure what the pattern is. But anchors with ids (#) do not appear to have this problem.
Another option once you clone the wiki, especially if you are already using Atom, is to use this Markdown to PDF package.
Worked great for me.
I found it really annoying to have to convert each markdown document separately (links between markdown documents are lost), so I ended up writing a simple C# program for my own use that does this in a single step: a) download the latest version of the wiki from GitHub, b) convert all the markdown documents, merged, into one PDF.
You can download the binaries (Windows or any platform supporting Mono) from:
https://github.com/borjafdezgauna/CoderDocTools/releases/latest
If, for example, you want to convert the SimionZoo repository by user simionsoft to PDF, you can run:
MarkdownToPDF.exe user=simionsoft project=SimionZoo output-file=SimionZoo.pdf
I've accomplished precisely this when creating the portable documentation for Barcode Writer in Pure PostScript:
GitHub Wiki + Makefile + pandoc → PDF
The process is described in this blog post.
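A minimal sketch of what such a Makefile can look like (the real one is described in the blog post; the file layout here is an assumption):
# Merge every wiki page into one PDF with pandoc
# (the recipe line must be indented with a tab)
PAGES := $(wildcard *.md)

manual.pdf: $(PAGES)
	pandoc -s --toc -o $@ $(PAGES)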
This question has already been answered, but I wanted to add my quick experience here.
I didn't find it necessary to install the desktop version of GitHub. You can clone by simply running the following from your command line:
git clone git@github.com:<username>/<repository>.wiki.git
(Of course, replace username and repository as needed).
The cloned wiki outputted 72 markdown files. As has been previously said, there are numerous ways of converting these files to PDF; you can pick your own tool. However, I will say that the easiest solution I encountered was to install Pandoc. I have macOS + Homebrew, so a quick brew install pandoc was all I needed.
Some info on using pandoc here: https://stackoverflow.com/a/14908316/3638172
You can also try html_links_to_pdf!
It's a Python 3 script made just to convert a GitHub wiki to PDF form, using the same styling that GitHub uses, but slightly cleaner.
I have Magento installed and I wanted to know how to generate the full API docs, like the ones on http://docs.magentocommerce.com/ that were generated using phpdoc. Is there a configuration file included with Magento for phpdoc that I can use to generate the documentation?
The actual program is called phpDocumentor, and you can use it on the command line to document the core Magento code by running phpdoc -d $MAGENTO_PATH/app/code/core/Mage/ -t docs. Don't forget to create a directory called docs, or you can set the target directory to whatever you want.
To document the API of an extension you can use phpdoc -d $MAGENTO_PATH/app/code/local/$PACKAGE/$MODULE, where $PACKAGE is the package name, $MODULE is the name of the module, and $MAGENTO_PATH is where Magento is installed.
Warning: it could take a while to generate all the API documentation as Magento is a pretty big program.
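A minimal sketch of the two steps together (with $MAGENTO_PATH set to wherever Magento is installed):
# Create the target directory, then document the Magento core
mkdir docs
phpdoc -d "$MAGENTO_PATH/app/code/core/Mage/" -t docs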
I am working on a small toy project that is getting more and more releases. Until now, the documentation was just a set of pages in the WordPress blog I set up for the project. However, as time passes, new releases come out, and I should update the online documentation to match the most recent release.
Unfortunately, if I do so, the docs for the previous releases will "disappear" as my doc pages are updated to the most recent version. I therefore decided to include the documentation in the release package and to keep the most recent documentation available online as a web page as well.
A trivial idea would be to wget the current docs from the WordPress pages, save them into the svn and therefore into the release package, repeating the procedure at every new release. Unfortunately, the HTML I get must be hacked by hand to fix the links (or I would have to hack WordPress to use BASE so that the HTML code is easily relocatable, something I don't want to do).
How should I handle the requirements of having, at the same time:
user-browsable documentation for the proper version included in the downloadable package
most recent documentation available online (and properly styled with my web theme)
keep synchronized between the svn and the actual online contents (in wordpress, or something else that fits nicely with my wordpress setup)
easy to use
Thanks
Edit: started a bounty to see if I can lure more answers. I think this is quite an important issue, and it would be nice to have multiple hints and opinions for future readers.
I would check your pages into SVN, and then have your webserver update from its local SVN working copy when you're ready to release. Put everything into SVN: WordPress, CSS, HTML, etc.
Wget can convert all the links in the document for you; see the convert-links option:
http://www.gnu.org/software/wget/manual/html_node/Advanced-Usage.html
Using this in conjunction with the other methods could yield a solution.
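A sketch of such a wget invocation (the URL is a placeholder):
# Mirror the doc pages and rewrite links so the local copy is browsable
wget --mirror --convert-links --page-requisites --no-parent \
     http://yoursite/project/docs/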
I think there are two problems to be solved here:
how and where to keep the documentation aligned with the code
where to publish the documentation
For 1, I think it's best to:
keep the documentation in a repository (SVN or git or whatever you already use for the code) as a set of files, instead of in a DB, as it is easier to keep a history of changes (and possibly to stay on par with the code releases)
use an approach where the documentation is generated from a set of source files (you'd keep the sources in the repository), from which the HTML files for the distribution package or for publishing on the web are generated. The two could possibly differ, as on the web you'd need to keep some version information (in the URL) that you don't need when packaging a single release.
To do "2" there are several tools that may generate a static site. One of them is Jekyll it's in ruby and looks quite complete and customizable.
Assuming that you use a tool like jekyll and keep the files and source in SVN you might setup your repo in this way:
repo/
  tags/
    rel1.0/
      source/
      documentation/
    rel2.0/
      source/
      documentation/
    rel3.0/
      source/
      documentation/
  trunk/
    source/
    documentation/
That is:
You keep the current documentation beside the source in the trunk
When you do a release you create a tag for the release
you configure your documentation generator to generate documentation from each of the repo/tags/<release>/documentation directories, such that the documentation for each release is put in a documentation_site/<release> directory
So to publish the documentation (point 2 above):
you copy the contents of the documentation_site directory onto the server, putting it in the same base dir as your WordPress install or linking it from there, such that each release's docs can be accessed as: http://yoursite/project/docs/relXX/
you create a link to the current release documentation so that it can always be reached as http://yoursite/project/docs/current
The trick here is to publish the documentation always under a proper release identifier (in the URL, on the filesystem) and use a link (or a redirect) to make sure that the "current documentation" on the web server points to the current release.
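A small sketch of that publish step (server paths and the release id are placeholders):
# Upload the generated site for this release, then repoint "current" at it
rsync -a documentation_site/rel2.0/ server:/var/www/project/docs/rel2.0/
ssh server ln -sfn rel2.0 /var/www/project/docs/current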
I have seen some programs use Help & Manual. But I am a Mac user, and I have no experience with it to know if it's any good. I'm looking for a solution myself for Mac.
For my own projects, if that were a need, I would create a sub-dir for the documentation and have all the files refer to each other relative to that known base. For example:
index.html          -- refers to images/example.jpg
README
-- subdirs....
images/example.jpg
section/index.html  -- links back to '../index.html',
                    -- refers to ../images/example.jpg
If the docs are included in the SVN/tarball download, then they are readable as-is. If they are generated from some original files, they would be pre-generated for a downloadable version.
Archive versions of the documentation can be unpacked/generated and placed into named directories (e.g. docs/v1.05/).
A simple PHP script can be written to get a list of the subdirs of the /docs/ directory from the local disk and display them, highlighting the most recent, for example.
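At the shell level, the same idea looks roughly like this (the PHP script would do the equivalent server-side; the path is an assumption):
# List the versioned doc directories in version order; the last is the newest
ls -d docs/v*/ | sort -V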