Possible to publish Docusaurus cross-repo, from multiple repositories, to aggregate docs together?

My question: are there any Docusaurus features out of the box (beyond https://github.com/facebook/docusaurus/pull/764) that will make the following easier? (I've asked this here because their GitHub question template says issues of that type will be closed, and to ask over here instead.)
In my company we have several repositories containing documentation in Markdown, as well as Markdown generated from source-code documentation in a variety of coding languages.
I would like to explore using Docusaurus to define a central site that pulls in documentation from a number of different repositories.
I'd like to do that:
to get a centralised search index
to aid discoverability
to get a centrally owned consistent theme/UX
to publish onwards into Confluence, so that non-technical users can find and browse content, if using it becomes company policy ( :( )
to retain all the advantages of docs-close-to-code
This is the structure that docusaurus expects:
docs/ # all documentation should be placed here
website/
  blog/
  build/ # on yarn run build
  core/
    Footer.js
  package.json
  pages/
  sidebars.json
  siteConfig.js
  static/
and this is the structure of published website that I'd like to end up with:
/v1/products/{product}/{version}/{language}/{content as from docs/}
# e.g.
/v1/products/spanner/{version}/en-GB/readme.html
/v1/internal/{gh-org}/{gh-repo}/{version}/{language}/{content as from docs/}
# e.g.
/v1/internal/my-org/my-repo/{version}/en-GB/readme.html
/v1/internal/my-org/my-repo/{version}/en-GB/proto-generated.html
(v1 is there because I predict I'll have forgotten something, and it lets me hedge against that and make later breaking-change redirects easier)
and I think therefore this is the intermediate structure I'll need to aggregate things into:
docs/
  product/
    language/
      prose|generated-lang
  gh-org/
    repo/
      language/
        prose|generated-lang
website/
  blog/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  core/
    Footer.js
  package.json
  pages/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  sidebars.json
  siteConfig.js
  static/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
... does that hang together?
I can git clone via bash or submodules quite readily to arrange this; that's not particularly an issue. I want to know whether anything already exists that would let me avoid needing to do that, e.g. native features of docs-site tools, Bazel rules, whatever.
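For what it's worth, a rough sketch of the bash route I mean (the repo URLs and target paths below are placeholders):

#!/usr/bin/env bash
# Clone each source repo and copy its docs/ into the aggregate layout.
set -euo pipefail

git clone --depth 1 https://github.com/my-org/spanner tmp/spanner
mkdir -p docs/spanner/en-GB
cp -r tmp/spanner/docs/. docs/spanner/en-GB/

git clone --depth 1 https://github.com/my-org/my-repo tmp/my-repo
mkdir -p docs/my-org/my-repo/en-GB
cp -r tmp/my-repo/docs/. docs/my-org/my-repo/en-GB/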

If you don't require a single-page app and don't need React (Docusaurus mentions this here), you can accomplish this using MkDocs as your static site generator together with the multirepo plugin. Below are the steps to get it all set up. I assume you have Python installed and have created a Python venv.
python -m pip install git+https://github.com/jdoiro3/mkdocs-multirepo-plugin
mkdocs new my-project
cd my-project
Add the below to your newly created mkdocs.yml. This will configure the plugin.
plugins:
  - multirepo:
      repos:
        - section: Repo1
          import_url: {Repo1 url}
        - section: Repo2
          import_url: {Repo2 url}
        - section: Repo3
          import_url: {Repo3 url}
Now, you can run mkdocs serve or mkdocs build, which will build a static site with all the documentation in one site.
This will:
get a centralised search index to aid discoverability
get a centrally owned consistent theme/UX (I suggest using Material for MkDocs)
retain all the advantages of docs-close-to-code
A similar plugin could probably be written for docusaurus.
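For illustration, a minimal sketch of what such a plugin could look like; the plugin name and the repos option shape are invented here, only the Docusaurus plugin lifecycle (a function returning an object with name and loadContent) is real:

// my-multirepo-plugin.js (a sketch, not a published plugin)
const { execSync } = require('child_process');

module.exports = function multirepoPlugin(context, options) {
  return {
    name: 'my-multirepo-plugin',
    // Runs as part of the build: clone each configured repo's docs locally.
    async loadContent() {
      for (const { url, dest } of options.repos) {
        execSync(`git clone --depth 1 ${url} ${dest}`);
      }
    },
  };
};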

You can use a script to pull those Markdown files, put them in the right location, and then build Docusaurus. With GitHub Actions you can run this automatically on any change to one of your source repos.
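A rough sketch of such a workflow; the repo names, paths, and dispatch event type are placeholders:

# .github/workflows/aggregate-docs.yml
name: Aggregate docs
on:
  push:
    branches: [main]
  repository_dispatch:
    types: [docs-updated]  # source repos can fire this event when their docs change
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull docs from a source repo
        run: |
          git clone --depth 1 https://github.com/my-org/my-repo tmp/my-repo
          mkdir -p docs/my-org/my-repo
          cp -r tmp/my-repo/docs/. docs/my-org/my-repo/
      - name: Build the Docusaurus site
        run: yarn install && yarn build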

Docusaurus has multi-instance support via the @docusaurus/plugin-content-docs plugin. Read more about it here: https://docusaurus.io/docs/docs-multi-instance.
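For example, each extra repo's docs can be mounted as its own instance in docusaurus.config.js (the id, paths, and sidebar file below are illustrative):

// docusaurus.config.js
module.exports = {
  plugins: [
    [
      '@docusaurus/plugin-content-docs',
      {
        id: 'my-repo',            // every instance beyond the default needs a unique id
        path: 'docs-my-repo',     // local folder holding that repo's markdown
        routeBasePath: 'my-repo', // URL prefix for this instance
        sidebarPath: require.resolve('./sidebars-my-repo.js'),
      },
    ],
  ],
};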

Related

How can I separate generated artifacts from the main build with semantic UI?

I am trying to figure out how to integrate Semantic UI with my gulp-based frontend toolchain.
The npm artifact semantic-ui includes an interactive installer that will write a semantic.json file to the root of my project and install the less files, gulp tasks and some configuration into my project. All of these files will be put in subdirectories of a single base directory specified in semantic.json.
I do not want any dependency implementation files or any generated files in the git repository for my project, because this will pollute revision history and lead to unnecessary merge conflicts. I would very much prefer to provide semantic.json only and .gitignore the semantic base directory. On npm install, the Semantic installer should install everything to the base directory specified in semantic.json. When building, I want the artifacts generated into a separate dist directory that does not reside under the semantic base directory.
However, if I do this, the installer will fail with a message stating that it cannot find the directories to update and drop me into the interactive installer instead. This is not what I want, because it means my build is no longer non-interactive (which will cause the CI build to fail).
How can I integrate Semantic UI into my build without having to commit Semantic and its generated artifacts into my git repository?
This is what we did in our similar scenario. The following are true:
Everything Semantic UI generates is in .gitignore. Therefore, the only Semantic UI files we have in our repo are:
semantic.json
semantic/src folder (this is where our theme modifications actually are)
semantic/tasks folder (probably doesn't need to be in git, but since it's needed for building, everything is simpler if we keep it in our repo)
We never need to (re)run the Semantic UI installer, everything is integrated in our own gulpfile.js.
Semantic UI outputs in our assets folder which is not in the same folder as its sources.
Semantic UI is updated automatically using npm as per the rules in my package.json.
Here are the steps needed to achieve this:
Install Semantic UI. By this I assume that you either used npm or cloned it from git (using npm is highly recommended); either way, you have semantic.json in your project's main folder and a semantic folder containing gulpfile.js, src and tasks.
Make sure Semantic UI can be built. Navigate to semantic/ and run gulp build. This should create a dist folder in your semantic/ directory. Delete it and also delete Semantic UI's gulpfile.js since you'll no longer need it.
Edit semantic.json. You need to edit two lines:
Change "packaged": "dist/", to the path where you'd like to output semantic.css and semantic.js relative to Semantic UI's folder. In our case, it was "packaged": "../../assets/semantic/",
Change "themes": "dist/themes/" in the same way, since the themes/ folder contains fonts and images Semantic UI uses so it needs to be in the same folder as semantic.css. In our case, it was "themes": "../../assets/semantic/dist/themes/".
Edit your gulpfile.js so that it uses Semantic UI's build task. Add var semanticBuild = require('./semantic/tasks/build'); (if semantic/ is in the same folder as your gulpfile.js) and then simply register a task that depends on it, for example gulp.task('semantic', semanticBuild);.
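Put together, the relevant part of the gulpfile would look roughly like this:

// gulpfile.js (assumes semantic/ sits beside this file)
var gulp = require('gulp');
var semanticBuild = require('./semantic/tasks/build'); // Semantic UI's own build task

// 'gulp semantic' now builds Semantic UI into the paths configured in semantic.json
gulp.task('semantic', semanticBuild);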
Optionally, create a clean task. We used del for that.
var del = require('del');

gulp.task('clean:semantic', function() {
  // del (v2+) returns a promise, so gulp knows when the task has finished
  return del(['assets/semantic']);
});

Can a git repository have N working trees

I'm trying to write a file store based on libgit2.
Software snapshots should be saved as branches (e.g. mysoftware), with specific versions committed and tagged. Later I want to check out the tags to different directories.
Looking at git_checkout_tree, it seems there is only one working tree per repository, so it does not even seem possible to check out multiple working trees concurrently.
Is this correct?
EDIT:
Additionally, I would like this to work on Windows without needing Cygwin!
The git_checkout_opts structure in libgit2 contains a target_directory option that will allow git_checkout_tree() to write to a different directory instead of using the default working tree for the repository. This would allow you to custom build a solution with libgit2 that maintained multiple checked out copies.
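A rough sketch of that approach (error handling omitted; the tag name and paths are placeholders, and newer libgit2 releases name the struct git_checkout_options):

#include <git2.h>

/* Check out a tag's tree into a separate directory.
   Assumes git_libgit2_init() has already been called. */
void checkout_tag_to_dir(git_repository *repo, const char *tag, const char *dir)
{
    git_object *treeish = NULL;
    git_checkout_options opts = GIT_CHECKOUT_OPTIONS_INIT;

    git_revparse_single(&treeish, repo, tag); /* e.g. "refs/tags/v1.0" */

    opts.checkout_strategy = GIT_CHECKOUT_SAFE;
    opts.target_directory = dir; /* write here instead of the default working tree */

    git_checkout_tree(repo, treeish, &opts);
    git_object_free(treeish);
}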
Without using that option, a libgit2 git_repository object expects there will be just one working directory and looks to the core.worktree config option if it isn't the "natural" working directory for the repository.
The git-new-workdir tricks with symlinks in the .git directory don't work great with libgit2 right now, I'm afraid, and they particularly don't work well on Windows. I'd love to see this addressed, but it isn't too high on my priority list.
Git doesn't support this natively, but you can use git-new-workdir to do it.

How to install play! framework modules?

I'm trying to install this pagination module for my Play! application, but can't get it to work. I've extracted the zip file into /play/modules/paginate-head/ and followed an example here on SO to change my dependencies.yml file into:
# Application dependencies
require:
    - play
    - pagination -> paginate-head
repositories:
    - My modules:
        type: local
        artifact: ${application.path}/../[module]
        contains:
            - paginate-head
But I still don't think the module is being loaded. I'm assuming its documentation should appear on http://localhost:9000/#documentation/home, or are there other ways to see if a module was loaded? It's not telling me anything in the console either.
Any ideas how to get this installed?
You don't need to extract a zip file; just running the command
play install paginate-head
should work fine. But unzipping will also work. You also don't need that "repositories" section in your dependencies.yml file. Play! knows where to find modules.
The real issue is that your require should look like this:
require:
    - play
    - play -> paginate head
Notice play to the left of the '->', which signifies that it's a module, and there's no dash between 'paginate' and 'head': 'paginate' is the module name and 'head' is the version, and these should be separated by a space.
Also, for modules that are hosted in the main Play! modules repo, you don't even have to install them. You can just add the require above and start Play!, and it will install the module automatically. Though it will install under the application's modules directory, not the play modules directory.
Hope that helps!

Project organization in perforce

I created several web applications that use the same static files (css, js, images).
When I used svn for version control, I used an external repository (svn:externals) to add files to the current project.
For example:
- Project_1
---- Webapp
-------- Static (external to static's repo)
- Project_2
---- Webapp
-------- Static (external to static's repo)
I could easily use them in the web pages by adding a link like /static/ ...
But now our company has moved to perforce.
How can I support the current structure?
We also use Maven; I thought of packing these files into a JAR and using it as a dependency, but then my editor (IDEA) does not see that this dependency contains JS scripts and styles.
And I would need to repackage and deploy the JAR whenever I make minor changes.
How to use maven correctly?
Perforce has support for defining multiple mappings from the depot to your hard drive as part of the client spec. You could, for example, set the following:
Client Name: Sample_Maven
Client Root: c:\inetpub\wwwroot
//depot/Project_1/Webapp/... //Sample_Maven/Project_1/...
//depot/Project_2/Webapp/... //Sample_Maven/Project_2/...
//depot/Shared/static/... //Sample_Maven/static/...
... any other folder mappings you need to bring in and sync ...
Perforce won't handle multiple mapping of the shared static folder situation by itself, you will have to use junctions/symlinks in your file system to get the behavior you want. A word of caution though, make sure only one of the shared static folders is actually managed through Perforce. It can get slightly grumpy if resources get changed out from under it without it knowing about the changes.
Really though, you are probably better off (if you can) having a single workspace/client spec per project: one for proj1 and one for proj2, each with its own mapping to the shared static folder. If you can structure things appropriately and just use Maven to build each "project", things will go more smoothly.
For a Maven-based solution, you could use WAR overlays; sharing common resources across multiple web applications is exactly what overlays are for.
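For example, it should be enough to declare the shared static files as a WAR-type dependency in each project's pom.xml; the maven-war-plugin overlays WAR dependencies onto the project automatically (the coordinates below are placeholders):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>shared-static</artifactId>
    <version>1.0.0</version>
    <type>war</type>
</dependency>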
It seems you have a couple of choices, both called overlays:
a) Maven overlays, as @Pascal suggests. Then you use a structure like @Goyuix suggests to check out the static content from Perforce.
b) Perforce overlays, which would allow you to have two different workspaces/client specs, one for each project, each importing the static content into the expected place in the filesystem. This is the closest match to the subversion structure you were using before.

How to keep synchronized, per-version documentation?

I am working on a small toy project which is getting more and more releases. Until now, the documentation was just a set of pages in the WordPress blog I set up for the project. However, as time passes new releases come out, and I should update the online documentation to match the most recent release.
Unfortunately, if I do so, the docs for the previous releases will "disappear" as my doc pages are updated to the most recent version. I therefore decided to include the documentation in the release package, and to keep the most recent documentation available online as a web page as well.
A trivial idea would be to wget the current docs from the wordpress pages, save them into the svn and therefore into the release package, repeating the procedure at every new release. Unfortunately, the HTML I get must be hacked by hand to fix the links (or I should hack wordpress to use BASE so that the HTML code is easily relocatable, something I don't want to do).
How should I handle the requirements of having at the same time:
user-browsable documentation for the proper version included in the downloadable package
most recent documentation available online (and properly styled with my web theme)
keep the svn and the actual online content synchronized (in wordpress, or something else that fits nicely with my wordpress setup)
easy to use
Thanks
Edit: started a bounty to see if I can lure more answers. I think this is a quite important issue, and it would be nice to have multiple hints and opinions for future readers.
I would check your pages into SVN, and then have your webserver update from its local SVN working copy when you're ready to release. Put everything into SVN: WordPress, CSS, HTML, etc.
WGet can convert all the links in the document for you. See the convert-links option:
http://www.gnu.org/software/wget/manual/html_node/Advanced-Usage.html
Using this in conjunction with the other methods could yield a solution.
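For example, something along these lines mirrors the doc pages and rewrites their links for local browsing (the URL is illustrative):

wget --mirror --convert-links --page-requisites --no-parent http://blog.example.com/docs/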
I think there are two problems to be solved here:
how and where to keep the documentation aligned with the code
where to publish the documentation
For 1, I think it's best to:
keep the documentation in a repository (SVN, git, or whatever you already use for the code) as a set of files, instead of in a DB, as it is easier to keep a history of changes (and possibly to stay on par with the code releases)
keep the documentation as a set of source files (in the repository) from which the HTML files for the distribution package or for publishing on the web are generated. The two could differ, since on the web you'd need to keep some version information (in the URL) that you don't need when packaging a single release.
To do "2" there are several tools that may generate a static site. One of them is Jekyll it's in ruby and looks quite complete and customizable.
Assuming that you use a tool like jekyll and keep the files and source in SVN you might setup your repo in this way:
repo/
  tags/
    rel1.0/
      source/
      documentation/
    rel2.0/
      source/
      documentation/
    rel3.0/
      source/
      documentation/
  trunk/
    source/
    documentation/
That is:
You keep the current documentation beside the source in the trunk
When you do a release you create a tag for the release
you configure your documentation generator to generate documentation from each repo/tags/relXX/documentation directory, such that the documentation for each release is put in a documentation_site/relXX directory
So to publish the documentation (point 2 above):
you copy the contents of the documentation_site directory to the server, putting it in the same base dir as your wordpress install (or linking from that), such that each release's docs can be accessed as: http://yoursite/project/docs/relXX/
you create a link to the current release documentation such that it can always be reached as http://yoursite/project/docs/current
The trick here is to publish the documentation always under a proper release identifier (in the URL, on the filesystem) and use a link (or a redirect) to make sure that the "current documentation" on the web server points to the current release.
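For example, on the web server (the paths are illustrative):

ln -sfn /var/www/project/docs/rel2.0 /var/www/project/docs/current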
I have seen some programs use Help & Manual. But I am a Mac user and don't have enough experience with it to know if it's any good. I'm looking for a solution for Mac myself.
For my own projects, if that were a need, I would create a sub-dir for the documentation, and have all the files refer relatively from that known base. For example,
index.html -- refers to images/example.jpg
README
-- subdirs....
images/example.jpg
section/index.html -- links back to '../index.html',
                   -- refers to ../images/example.jpg
If the docs are included in the SVN/tarball download, then they are readable as-is. If they are generated from some original files, they would be pre-generated for a downloadable version.
Archive versions of the documentation can be unpacked/generated and placed into named directories (e.g. docs/v1.05/).
A simple PHP script can be written to list the subdirs of the /docs/ directory on the local disk and display them, highlighting the most recent.
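Something along these lines, for instance (the docs path is illustrative):

<?php
// List the versioned doc directories and link to each, newest first.
$versions = array_map('basename', glob(__DIR__ . '/docs/*', GLOB_ONLYDIR));
rsort($versions); // assumes version names sort sensibly
foreach ($versions as $i => $v) {
    $label = ($i === 0) ? "$v (latest)" : $v;
    echo "<a href=\"docs/$v/\">$label</a><br>\n";
}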