How do I generate the API docs/documentation for Magento?

I have Magento installed and I wanted to know how to generate the full API docs, like the ones on http://docs.magentocommerce.com/ that were generated using phpdoc. Is there a configuration file included with Magento for phpdoc that I can use to generate the documentation?

The actual program is called phpDocumentor, and you can use it on the command line to document the core Magento code with phpdoc -d $MAGENTO_PATH/app/code/core/Mage/ -t docs. Don't forget to create a directory called docs first, or set the target directory to whatever you want.
To document the API of an extension you can use phpdoc -d $MAGENTO_PATH/app/code/local/$PACKAGE/$MODULE, where $PACKAGE is the package name, $MODULE is the name of the module, and $MAGENTO_PATH is where Magento is installed.
Warning: it could take a while to generate all the API documentation as Magento is a pretty big program.
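As a concrete sketch, assuming $MAGENTO_PATH is set and using made-up MyPackage/MyModule names (adjust these to your installation):
mkdir docs
phpdoc -d $MAGENTO_PATH/app/code/core/Mage/ -t docs
# a single local module can be documented into its own target directory
mkdir docs-mymodule
phpdoc -d $MAGENTO_PATH/app/code/local/MyPackage/MyModule -t docs-mymodule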

Related

Possible to publish docusaurus cross-repo from multiple-repositories, to aggregate docs together?

My question: are there any docusaurus features out of the box (beyond https://github.com/facebook/docusaurus/pull/764) that will make the following easier? (I've asked this here because their GitHub question template tells me issues of that type will be closed, and to ask over here instead.)
In my company we have several different repositories containing documentation in markdown and also markdown generated from source-code documentation from a variety of different coding languages.
I would like to explore using docusaurus to define a central site, but pull in documentation from a number of different repositories.
I'd like to do that:
to get a centralised search index
to aid discoverability
to get a centrally owned consistent theme/UX
to publish onwards into Confluence so that non-technical users can find and browse content, if that becomes the company policy ( :( )
to retain all the advantages of docs-close-to-code
This is the structure that docusaurus expects:
docs/ # all documentation should be placed here
website/
  blog/
  build/ # on yarn run build
  core/
    Footer.js
  package.json
  pages/
  sidebars.json
  siteConfig.js
  static/
and this is the structure of published website that I'd like to end up with:
/v1/products/{product}/{version}/{language}/{content as from docs/}
# e.g.
/v1/products/spanner/{version}/en-GB/readme.html
/v1/internal/{gh-org}/{gh-repo}/{language}/{content as from docs/}
#e.g.
/v1/my-org/my-repo/{version}/en-GB/readme.html
/v1/my-org/my-repo/{version}/en-GB/proto-generated.html
(v1 is there because I predict I'll have forgotten something, and it lets me hedge against that and make later breaking-change redirects easier)
and I think therefore this is the intermediate structure I'll need to aggregate things into:
docs/
  product/
    language/
      prose|generated-lang
  gh-org/
    repo/
      language/
        prose|generated-lang
website/
  blog/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  core/
    Footer.js
  package.json
  pages/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  sidebars.json
  siteConfig.js
  static/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
... does that hang together?
I can git clone via bash or submodules quite readily to arrange this; that's not particularly an issue. I want to know if there are things that exist already that will allow me to avoid needing to do that - e.g. native features of docs-site tools, bazel rules, whatever.
If you don't require a single-page app and don't need React (docusaurus mentions this here), you can accomplish this using MkDocs as your static site generator and the multirepo plugin. Below are the steps to get it all set up. I assume you have Python installed and have created a Python venv.
python -m pip install git+https://github.com/jdoiro3/mkdocs-multirepo-plugin
mkdocs new my-project
cd my-project
Add the below to your newly created mkdocs.yml. This will configure the plugin.
plugins:
  - multirepo:
      repos:
        - section: Repo1
          import_url: {Repo1 url}
        - section: Repo2
          import_url: {Repo2 url}
        - section: Repo3
          import_url: {Repo3 url}
Now, you can run mkdocs serve or mkdocs build, which will build a static site with all the documentation in one site.
This will:
get a centralised search index to aid discoverability
get a centrally owned consistent theme/UX (I suggest using Material for MkDocs)
retain all the advantages of docs-close-to-code
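For reference, the mkdocs serve and mkdocs build commands mentioned above behave like this with MkDocs defaults:
mkdocs serve   # preview the site locally at http://127.0.0.1:8000
mkdocs build   # write the static site into the site/ directory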
A similar plugin could probably be written for docusaurus.
You can use a script to pull those md files, put them in the right locations, and then build docusaurus. With GitHub Actions you can run this automatically upon any change to one of your source repos; a rough sketch follows.
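A minimal aggregation sketch, assuming hypothetical repository names and the layout sketched in the question (not a finished pipeline):
#!/usr/bin/env bash
set -euo pipefail

# hypothetical source repositories; replace with your own
REPOS="my-org/spanner my-org/my-repo"

for repo in $REPOS; do
  name=$(basename "$repo")
  rm -rf "/tmp/$name"
  git clone --depth 1 "https://github.com/$repo.git" "/tmp/$name"
  # copy each repo's docs/ tree into the aggregated docusaurus docs/ tree
  mkdir -p "docs/$name/en-GB"
  cp -R "/tmp/$name/docs/." "docs/$name/en-GB/"
done

# build the docusaurus site from the aggregated tree
cd website && yarn install && yarn run build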
Docusaurus supports multiple docs instances ("multi-instance"). You have to use the @docusaurus/plugin-content-docs plugin. Read more about it here: https://docusaurus.io/docs/docs-multi-instance.
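If you go that route, the plugin is installed as an npm package and each additional docs instance is then declared in docusaurus.config.js, as described on the page linked above:
npm install --save @docusaurus/plugin-content-docs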

Swagger codegen with multiple swagger json

I am trying to generate nodejs client sdk using https://github.com/swagger-api/swagger-codegen
Here is the command I use
swagger-codegen generate -i http://petstore.swagger.io/v2/swagger.json -l javascript -o ./petstore
But for the actual SDK I need to generate, the Swagger spec is split into two different JSON files, and I want to create a single SDK for both. How can I do this with swagger-codegen, using multiple Swagger JSON files at the same time?
Reuse the --input-spec option (or -i) for each location of a Swagger spec, given as a URL or a file.
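One way to read that, sketched with placeholder spec URLs and output directories: invoke the generator once per spec and combine or publish the generated clients together afterwards.
swagger-codegen generate -i http://example.com/service-a/swagger.json -l javascript -o ./sdk/service-a
swagger-codegen generate -i http://example.com/service-b/swagger.json -l javascript -o ./sdk/service-b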

Dropwizard serve external images directory

I have a Dropwizard API app and I want one endpoint where I can run the call and also upload an image; these images have to be saved in a directory and then served through the same application context.
Is this possible with Dropwizard? I can only find static asset bundles.
There is similar question already: Can DropWizard serve assets from outside the jar file?
The above module is mentioned in Dropwizard's third-party modules list. There is also an official modules list. These two lists are hard to find, maybe because the main documentation doesn't reference them.
There is also dropwizard-file-assets, which seems new. I don't know which module will work best for your case; both are based on Dropwizard's AssetServlet.
If you don't like them, you could use them as examples of how to implement your own. I suspect that the resource caching part may not be appropriate for your use case if someone replaces the same resource name with new content: https://github.com/dirkraft/dropwizard-file-assets/blob/master/src/main/java/com/github/dirkraft/dropwizard/fileassets/FileAssetServlet.java#L129-L141
Edit: This is a simple project that I've made using dropwizard-configurable-assets-bundle. Follow the instructions in the README.md. I think it does exactly what you want: put some files in a directory somewhere on the file system (outside the project source code) and serve them if they exist.

Download artifacts archive from Artifactory

I'm testing Artifactory 4.2.0 PRO.
The really nice feature is the possibility to download an archive of all the files produced by a build by executing something like:
curl -XPOST -u admin:password -H "Content-Type: application/json" http://localhost:8081/artifactory/api/archive/buildArtifacts -d '
{
  "buildName": "Vehicle Routing Problem",
  "buildNumber": "3",
  "archiveType": "zip"
}' > build.zip
However, I'm unable to find whether there is a possibility to do the same (download an archive) when specifying exact properties using AQL. I have been trying to upload other artifacts with properties exactly the same as those pushed by the build, but they were not fetched by the snippet above (I assume some sort of metadata is stored somewhere).
What are the possibilities to fetch multiple artifacts without using many HTTP queries?
Regards.
The Retrieve Folder or Repository Archive API allows you to download an archive file (zip/tar/tar.gz/tgz are supported) containing all the artifacts that reside under the specified path (folder or repository root). However, it does not support filtering by properties.
The Artifactory CLI supports concurrently downloading multiple files. It also supports downloading files which match a set of property values. The CLI, however, will use multiple HTTP requests to do so.
A third option would be developing a custom user plugin which allows downloading an archive of artifacts matching a set of properties. An execution user plugin can be invoked as a REST API call. There is a sample plugin in the JFrogDev GitHub account which can serve as a good starting point; it allows downloading the content of a directory as an archive.
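Rough sketches of the first two options (the repository path, credentials and property names below are placeholders):
# option 1: archive everything under a folder or repository root (no property filtering)
curl -u admin:password "http://localhost:8081/artifactory/api/archive/download/libs-release-local/com/acme?archiveType=zip" -o archive.zip
# option 2: JFrog CLI download filtered by properties (multiple HTTP requests under the hood)
jfrog rt download --props "build.name=Vehicle Routing Problem;build.number=3" "libs-release-local/*" out/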

Adding Documentation of a library to manual pages

I am working with Ubuntu 12.04.1. I am learning to make a basic video player using the FFmpeg library in C. My manual pages don't show any entries for the headers/functions of the library. Can someone please show me a way to add the documentation to my manual pages?
It is much easier to search that way than to search a web page every time.
PS: I have tried to add the documentation to the man pages using the Synaptic package manager. I installed the ffmpeg-doc package, but it doesn't seem to work.
Thanks.
Does this solve your problem?
http://ffmpeg-users.933282.n4.nabble.com/Building-man-pages-td934441.html
The FFmpeg project uses Doxygen to create its documentation, and Doxygen can be configured to output man page format.
Modify the file doc/Doxyfile as shown below to tell Doxygen you want man page output.
GENERATE_MAN = YES
MAN_LINKS = YES
The MAN_LINKS option is important: if you omit it, you cannot look up the correct API call by name.
After you configure the FFmpeg project by invoking ./configure ..., use the apidoc target to create the man pages.
$ make apidoc
The man pages are written to doc/doxy/man/man3; append doc/doxy/man to your man page search path.
$ export MANPATH=$MANPATH:`pwd`/doc/doxy/man
Then you can look up the man pages for the FFmpeg library API.
$ man av_register_all
Note
The man pages generated by Doxygen for most of the library API calls are just links to the real source man page.
After opening one with man, use the / key to search and jump to the part of the documentation you want.