Generate Protobuf documentation? [closed] - documentation

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
Does anyone know of a good tool to generate Google Protobuf documentation using the .proto source files?

[Update: Aug 2017. Adapted to the full Go rewrite of protoc-gen-doc, currently 1.0.0-rc]
The protoc-gen-doc plugin, created by @estan (see also his earlier answer), provides a good and easy way to generate your documentation in HTML, JSON, Markdown, PDF and other formats.
There are a number of additional things that I should mention:
estan is no longer the maintainer of protoc-gen-doc; pseudomuto is
In contrast to what I've read on various pages it is possible to use rich inline formatting (bold/italic, links, code snippets, etc.) within comments
protoc-gen-doc has been completely rewritten in Go and now uses Docker for generation (instead of apt-get)
The repository is now here: https://github.com/pseudomuto/protoc-gen-doc
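To illustrate the second point, a hypothetical .proto file (the message and field names here are made up) using rich inline formatting in its comments could look like this:

```proto
// A **feed** of data, as described in the [Dat docs](https://docs.datproject.org/).
//
// Comments may contain *italic*, **bold**, `inline code` and links.
message Feed {
  // The feed's public key, as raw bytes.
  required bytes key = 1;
}
```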
To demonstrate the second point, I have created an example repository that auto-generates the Dat Project Hypercore Protocol documentation in a nice format.
You can view various HTML and Markdown output generation options at (or look here for a Markdown example on SO):
https://github.com/aschrijver/protoc-gen-doc-example
The TravisCI script that does all the automation is this simple .travis.yml file:
sudo: required
services:
  - docker
language: bash
before_script:
  # Create directory structure, copy files
  - mkdir build && mkdir build/html
  - cp docgen/stylesheet.css build/html
script:
  # Create all flavours of output formats to test (see README)
  - docker run --rm -v $(pwd)/build:/out -v $(pwd)/schemas/html:/protos:ro pseudomuto/protoc-gen-doc
  - docker run --rm -v $(pwd)/build/html:/out -v $(pwd)/schemas/html:/protos:ro -v $(pwd)/docgen:/templates:ro pseudomuto/protoc-gen-doc --doc_opt=/templates/custom-html.tmpl,inline-html-comments.html protos/HypercoreSpecV1_html.proto
  - docker run --rm -v $(pwd)/build:/out -v $(pwd)/schemas/md:/protos:ro pseudomuto/protoc-gen-doc --doc_opt=markdown,hypercore-protocol.md
  - docker run --rm -v $(pwd)/build:/out -v $(pwd)/schemas/md:/protos:ro -v $(pwd)/docgen:/templates:ro pseudomuto/protoc-gen-doc --doc_opt=/templates/custom-markdown.tmpl,hypercore-protocol_custom-template.md protos/HypercoreSpecV1_md.proto
deploy:
  provider: pages
  skip_cleanup: true           # Do not forget, or the whole gh-pages branch is cleaned
  name: datproject             # Name of the committer in gh-pages branch
  local_dir: build             # Take files from the 'build' output directory
  github_token: $GITHUB_TOKEN  # Set in travis-ci.org dashboard (see README)
  on:
    all_branches: true         # Could be set to 'branch: master' in production
(PS: The hypercore protocol is one of the core specifications of the Dat Project ecosystem of modules for creating decentralized peer-to-peer applications. I used their .proto file to demonstrate concepts)

An open source protobuf plugin that generates DocBook and PDF from the proto files.
http://code.google.com/p/protoc-gen-docbook/
Disclaimer: I am the author of the plugin.

In Protobuf 2.5, the "//" comments you put into your .proto files actually make it into the generated Java source code as Javadoc comments. More specifically, the protoc compiler will take "//" comments like this:
//
// Message level comments
message myMsg {
  // Field level comments
  required string name = 1;
}
These comments will go into your generated Java source files. For some reason protoc encloses the Javadoc comments in <pre> tags, but all in all it is a nice new feature in v2.5.

In addition to askldjd's answer, I'd like to point out my own tool at https://github.com/estan/protoc-gen-doc . It is also a protocol buffer compiler plugin, but it can generate HTML, Markdown or DocBook out of the box. It can also be customized using Mustache templates to generate any text-based format you like.
Documentation comments are written using /** ... */ or /// ....

The thread is old, but the question still seems relevant. I have had very good results with doxygen + proto2cpp. proto2cpp works as an input filter for doxygen.

Doxygen supports so called input filters, which allow you to transform code into something doxygen understands. Writing such a filter for transforming the Protobuf IDL into C++ code (for example) would allow you to use the full power of Doxygen in .proto files. See item 12 of the Doxygen FAQ.
I did something similar for CMake, the input filter just transforms CMake macros and functions to C function declarations. You can find it here.
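A minimal Doxyfile fragment wiring in such a filter could look like the following (the exact proto2cpp invocation is an assumption; check its README for the real command line):

```
# Treat .proto files as sources and run them through the input filter
FILE_PATTERNS     = *.proto
EXTENSION_MAPPING = proto=C++
FILTER_PATTERNS   = *.proto="python proto2cpp.py"
```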

Since the .proto file is mostly just declaration, I usually find that the source file with inline comments is straightforward and effective documentation.

https://code.google.com/apis/protocolbuffers/docs/techniques.html
Self-describing Messages
Protocol Buffers do not contain descriptions of their own types. Thus,
given only a raw message without the corresponding .proto file
defining its type, it is difficult to extract any useful data.
However, note that the contents of a .proto file can itself be
represented using protocol buffers. The file
src/google/protobuf/descriptor.proto in the source code package
defines the message types involved. protoc can output a
FileDescriptorSet – which represents a set of .proto files – using the
--descriptor_set_out option. With this, you could define a
self-describing protocol message like so:
message SelfDescribingMessage {
  // Set of .proto files which define the type.
  required FileDescriptorSet proto_files = 1;

  // Name of the message type. Must be defined by one of the files in
  // proto_files.
  required string type_name = 2;

  // The message data.
  required bytes message_data = 3;
}
By using classes like DynamicMessage (available in C++ and Java), you
can then write tools which can manipulate SelfDescribingMessages.
All that said, the reason that this functionality is not included in
the Protocol Buffer library is because we have never had a use for it
inside Google.
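The --descriptor_set_out option mentioned above is invoked on the command line like so (assuming protoc is on your PATH; addressbook.proto here is a hypothetical schema file):

```
protoc --include_imports --descriptor_set_out=addressbook.pb addressbook.proto
```

The resulting addressbook.pb is a serialized FileDescriptorSet, which is exactly the message type the proto_files field above expects.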

Related

In MkDocs, how to include script-generated content?

How can I include script-generated content in documentation built by mkdocs (or mike)?
For example, I'd like to present a table of the command-line arguments of a program; currently the most complete listing (and descriptions) of those arguments (as supported by a given version of the program) is the output of the program itself (invoked with the --help flag). As with API documentation, if I were to manually transcribe this into a markdown file, there would be a maintenance burden to prevent drift between the docs and the codebase. Do I need to pre-orchestrate a script that generates entire markdown files, or is there a way to embed shell commands in the docs source (to be evaluated automatically when the docs are built)?
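One way to approach this (a sketch, assuming the third-party mkdocs-macros plugin is installed and enabled under `plugins:` in mkdocs.yml) is to register a small helper macro that captures a command's output, so a page can embed something like `{{ shell("myprog --help") }}`:

```python
# main.py in the docs project root, picked up by the mkdocs-macros plugin
import subprocess

def shell(cmd):
    """Run a shell command and return its captured stdout."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

def define_env(env):
    # Register the helper so markdown pages can call {{ shell("...") }};
    # it is evaluated at build time, so the docs track the current binary.
    env.macro(shell)
```

Since the command runs on every build, the generated table can never drift from the installed version of the program.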

Can we make Spoon's output follow the same directory path as the original?

For now, Spoon's output directory structure follows the package path declared in the *.java file. In fact, there are many files (even *.java files) whose real file paths differ from their package paths.
So my Spoon output folder was disordered.
In short the answer for this question is: no.
Spoon uses standard Java organization to process output files, meaning: each Java file is output in its package hierarchy as it should be done for source files (see: https://docs.oracle.com/javase/tutorial/java/package/managingfiles.html).
However, if your problem is related to files created because of inner classes, you can solve it by using the following option with the value "compilationunits":
[--output-type <output-type>]
    States how to print the processed source code:
    nooutput|classes|compilationunits (default: classes)
Finally if it's really an issue for you, don't hesitate to propose a new feature through a pull request on the Github repository!

Whole site compilation of markdown/pandoc? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
With Sphinx-doc, you can create a bunch of reStructuredText files, with an index.rst file that includes a table-of-contents macro that auto-generates a table of contents from the other included files, and a conf.py that acts as a compilation config. You can then compile the lot into a single python-doc-style site, complete with index, navigation tools, and a search function.
Is there any comparable tool for markdown (preferably pandoc-style markdown)?
Some static site generators that work with Markdown:
Jekyll is very popular and also the engine behind GitHub pages.
Python variants: Hyde or Pelican
nanoc (used f.ex. in the GitHub API documentation)
Middleman: maybe the best one?
I think none of them use pandoc (maybe because it's written in Haskell), but they all use an enhanced Markdown syntax or can be configured to use pandoc.
Other interesting ways to generate a site from markdown:
Markdown-Wikis that are file based: f.ex. Gollum, the Wiki-Engine that is also used by GitHub
Telegram: commercial; written by David Pollak, the inventor of the Lift Scala framework
Engines that use Pandoc:
Gitit: Pandoc Wiki
Hakyll: Haskell library to generate static sites
Pandoc plugin for Ikiwiki
Yst static site generator
Gouda - generates a site from a directory of markdown files
Rippledoc - generates a navigable site from nested directories of markdown files
The definitive listing of Static Site Generators
A good overview of static site generators: http://staticsitegenerators.net/
Pandoc, the GNU make and sed commands, sprinkled with some CSS are all you need to fully automate the creation of a static website starting from Markdown.
Pandoc offers three command line options which can provide navigation between pages as well as the creation of a table of contents (TOC) based on the headings inside the page. With CSS you can make the navigation and TOC look and behave the way you want.
-B FILE, --include-before-body=FILE
Include contents of FILE, verbatim, at the beginning of the document body (e.g. after the <body> tag in HTML, or the \begin{document} command in LaTeX). This can be used to include navigation bars or banners in HTML documents. This option can be used repeatedly to include multiple files. They will be included in the order specified. Implies --standalone.
--toc, --table-of-contents
Include an automatically generated table of contents.
--toc-depth=NUMBER
Specify the number of section levels to include in the table of contents.
The default is 3 (which means that level 1, 2, and 3 headers will be listed in the contents).
As a matter of fact, my personal website is built this way. Check out its makefile for more details. It is all free-libre open-source licensed under the GNU GPL version 3.
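A minimal makefile along these lines (file names such as nav.html and style.css are placeholders for your own navigation fragment and stylesheet) might look like:

```makefile
# Build every Markdown page into html/, with a shared nav bar and a TOC
PAGES := $(patsubst %.md,html/%.html,$(wildcard *.md))

all: $(PAGES)

html/%.html: %.md nav.html
	mkdir -p html
	pandoc --standalone --toc --toc-depth=3 \
	       --include-before-body=nav.html --css=style.css \
	       --output=$@ $<
```

Because each page depends on nav.html, editing the navigation bar rebuilds the whole site on the next make run.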
If you're OK not using Pandoc, mkdocs would seem to fit your needs.
If you definitely want to use Pandoc-flavoured Markdown, you could check out pdsite. I wrote it as a way to get mkdocs-style site generation with more input formats (like emacs org-mode) and without the Python dependencies - you pass in a folder of Markdown files and get out an HTML site (including automatically-generated multi-level navigation links). It's similar to Serge Stroobandt's approach in that it's just a shell script (it does require you to have tree installed however). Unfortunately it doesn't have a search function yet, although that wouldn't be too hard to add...

Why does Doxygen still execute pdfTeX despite `GENERATE_LATEX=NO`?

I have a reasonably large C/C++ project and want to generate only an HTML version of the documentation with Doxygen. In my Doxyfile I've written
GENERATE_LATEX = NO
Indeed, there is no latex directory in the specified output directory; just html. However, I'm getting output from pdfTeX on stderr:
...
This is pdfTeX, Version 3.1415926-1.40.11 (TeX Live 2010)
restricted \write18 enabled.
entering extended mode
(./_formulas.tex
LaTeX2e <2009/09/24>
...
Output written on _formulas.dvi (279 pages, 49376 bytes).
Transcript written on _formulas.log.
...
Why?
Doxygen's installation instructions list LaTeX as an optional (additional) tool, which is not required. Thus I assume it is not required for the basic functionality of HTML generation.
How can I make Doxygen not execute pdfTex? (no, I do not want to uninstall *TeX on my machine)
The file _formulas.tex is created as the result of using formulas.
If you want formulas without the need for LaTeX you can set USE_MATHJAX to YES.
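In Doxyfile terms, that combination is:

```
GENERATE_LATEX = NO    # no latex/ output directory
USE_MATHJAX    = YES   # render formulas in the HTML via MathJax, no pdfTeX run
```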

How do the Mogenerator parameters work, which can I send via Xcode? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
The help for Mogenerator is very minimal. What do all the parameters do?
Parameters that work both via the command line utility and Xcode:
--base-class: The name of the base class which the "private class" (e.g. _MyObject.h) will inherit from. This will also add an import in the form of #import "MyManagedObject.h" to the same .h file. Tip: if the class you want to inherit from is located in a library, the default import statement won't work. As a workaround, you could have an extra level of inheritance for each project you create and have that class inherit from the library one (e.g. set the base class to MyProjectManagedObject, which you create manually and which inherits from MyLibManagedObject).
--template-path: The path to where the 4 .motemplate files are located. When this is not provided, it will look at all the "app support directories" (e.g. "/Library/Application Support/mogenerator/").
--template-group: A subdirectory name underneath the template-path directory to use.
--template-var arc=true: Required for the generated files to compile while using ARC.
--output-dir: The output directory for all generated files.
--machine-dir: The directory where the _<class>.h and _<class>.m will be output to. If --output-dir is also defined, this parameter takes precedence.
--human-dir: The directory where the <class>.h and <class>.m will be output to. If --output-dir is also defined, this parameter takes precedence.
--includem: The full path to a file that will include all the #imports for all the .h files that are created. This file does not need to exist (it will be created for you if it doesn't). It will not be included in the project automatically; you must include it manually by dragging it into the Groups & Files list of your project.
Using relative paths in Xcode for any of the above arguments won't work since the working directory is set to one of the root directories of the system (e.g. Applications, Developer, Library, or System). (I haven't had enough time to figure out which one of these it is exactly.)
Parameters that cannot be used in Xcode:
--model: The path to the .xcdatamodel file, cannot be set in Xcode.
--list-source-files
--orphaned
--versioned
--help
Running and sending parameters to xmod via Xcode:
(Update: I haven't tried this on Xcode 4, only Xcode 3. For Xcode 4, you can add mogenerator as a build phase instead of using the steps below.)
Go to the info page of the .xcdatamodel file.
Choose the Comments tab.
Add xmod to the comments field, on its own line.
Every time you save the model, it will regenerate the machine files for you.
To send parameters, they must be on their own line(s):
This works:
xmod
--base-class CLASS
--template-path PATH
And even this works:
xmod
--base-class CLASS --template-path PATH
But, this won't work:
xmod --base-class CLASS --template-path PATH
Note: You must close the Info window for the settings to take effect.
As of Xcode 4, the Info window is no longer available, so don't be concerned if you can't set it up as described above.
Use John Blanco's guide to set up a scripting target which allows you to pass command-line arguments directly to mogenerator. Note that you might have to tweak the paths in his example slightly... toss a pwd in the script and check the paths against the script's working directory if it doesn't run for you right away.
For a list of available command-line arguments, run mogenerator --help in the Terminal. AFAICT, all of them work from the scripting step.
See this answer for another way to invoke mogenerator via a "pre-action" if you want to automatically rebuild your machine files with every build. There's also a good tip on putting a mogenerator script into your VCS.
Here is the output from --help as of version 1.27
mogenerator: Usage [OPTIONS] <argument> [...]
-m, --model MODEL Path to model
-C, --configuration CONFIG Only consider entities included in the named configuration
--base-class CLASS Custom base class
--base-class-import TEXT Imports base class as #import TEXT
--base-class-force CLASS Same as --base-class except will force all entities to have the specified base class. Even if a super entity exists
--includem FILE Generate aggregate include file for .m files for both human and machine generated source files
--includeh FILE Generate aggregate include file for .h files for human generated source files only
--template-path PATH Path to templates (absolute or relative to model path)
--template-group NAME Name of template group
--template-var KEY=VALUE A key-value pair to pass to the template file. There can be many of these.
-O, --output-dir DIR Output directory
-M, --machine-dir DIR Output directory for machine files
-H, --human-dir DIR Output directory for human files
--list-source-files Only list model-related source files
--orphaned Only list files whose entities no longer exist
--version Display version and exit
-h, --help Display this help and exit
Implements generation gap codegen pattern for Core Data.
Inspired by eogenerator.
Also, this may be helpful. To determine which params can be used with
--template-var KEY=VALUE
open a *.motemplate file and search for strings like "TemplateVar."; after the dot you will see the parameter name and be able to work out what it does.
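For instance, this one-liner lists every template variable referenced by the templates (the path shown is the default support directory mentioned earlier; adjust it to wherever your .motemplate files live):

```shell
grep -oh 'TemplateVar\.[A-Za-z]*' "/Library/Application Support/mogenerator/"*.motemplate | sort -u
```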
These params have built-in template support:
--template-var arc=true
--template-var frc=true
--template-var modules=true