GitLab Pages CI hugo with asciidoctor (low maintenance) - gitlab-ci

What is the best low-maintenance / future-proof way to use GitLab Pages to build a Hugo site with AsciiDoc pages (using Asciidoctor)?
There are some images on Docker Hub, but they aren't reliably maintained.
Ideally, I think it would be best to use the GitLab Pages image for Hugo (extended) and install the Asciidoctor gem.

OK, the steps are:
1. install Ruby
2. gem install asciidoctor
3. whitelist asciidoctor in Hugo's security policy
The first two are performed in .gitlab-ci.yml and the final step in config.toml.
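A minimal sketch of the .gitlab-ci.yml for this setup. The image tag and the apk package name are assumptions based on the official pages/hugo extended image being Alpine-based; adjust them to whatever image you actually use:

```yaml
# Sketch: build a Hugo (extended) site with Asciidoctor on GitLab Pages.
image: registry.gitlab.com/pages/hugo/hugo_extended:latest

pages:
  script:
    - apk add --no-cache ruby ruby-dev   # step 1: install Ruby
    - gem install asciidoctor            # step 2: install the Asciidoctor gem
    - hugo
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```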
Demo: https://gitlab.com/bquast1/bquast1.gitlab.io
The security policy change to whitelist asciidoctor is at the end of the config.toml file.
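For reference, the whitelist at the end of config.toml can look like the following sketch, based on Hugo's security.exec.allow setting (merge the asciidoctor entry with whatever allow patterns your site already needs):

```toml
# Hypothetical end of config.toml: allow Hugo to shell out to asciidoctor.
[security]
  [security.exec]
    allow = ['^go$', '^npx$', '^postcss$', '^asciidoctor$']
```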

Related

Can I change the Jitsi front-end and install it from my own repository?

I want to make some changes to Jitsi and install it from my own repository.
After making those changes, I want to install it on many servers, each with its own unique domain.
You can change the front-end in two ways:
Option 1 (recommended)
Follow the manual installation documentation and replace the «jitsi-meet» component, which lives in /srv/jitsi-meet.
This directory must contain your version of jitsi-meet (you must build the project; follow this step, but clone your repository instead of the official one).
Option 2 (not recommended)
Follow the quick installation, then replace /usr/share/jitsi-meet.
This is the same step, but in another directory.
Be careful: this option is faster, but each update of the jitsi-meet package (apt) can break your installation and overwrite the /usr/share/jitsi-meet directory.
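Option 2 can be sketched roughly as follows. The fork URL is a placeholder, and the build commands are assumptions based on jitsi-meet being an npm/make project; check the build instructions for the version you forked:

```shell
# Sketch: build a fork of jitsi-meet and swap it into the quick-install location.
git clone https://github.com/your-org/jitsi-meet.git   # your fork (placeholder URL)
cd jitsi-meet
npm install                                            # install build dependencies
make                                                   # produce the deployable front-end
sudo cp -R . /usr/share/jitsi-meet                     # replace the packaged front-end
```

Remember that, as noted above, an apt upgrade of the jitsi-meet package can overwrite this directory, so you would need to re-run the copy after each update.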

Possible to publish docusaurus cross-repo from multiple-repositories, to aggregate docs together?

My question: are there any Docusaurus features out of the box (beyond https://github.com/facebook/docusaurus/pull/764) that will make the following easier? (I'm asking here because their GitHub issue template says questions of this type will be closed, and to ask them over here instead.)
In my company we have several different repositories containing documentation in markdown and also markdown generated from source-code documentation from a variety of different coding languages.
I would like to explore using docusaurus to define a central site, but pull in documentation from a number of different repositories.
I'd like to do that:
to get a centralised search index
to aid discoverability
to get a centrally owned consistent theme/UX
to publish onwards into confluence so that non-technical users can find and browse content if that becomes the company policy to use ( :( )
to retain all the advantages of docs-close-to-code
This is the structure that docusaurus expects:
docs/           # all documentation should be placed here
website/
  blog/
  build/        # on yarn run build
  core/
    Footer.js
  package.json
  pages/
  sidebars.json
  siteConfig.js
  static/
and this is the structure of published website that I'd like to end up with:
/v1/products/{product}/{version}/{language}/{content as from docs/}
# e.g.
/v1/products/spanner/{version}/en-GB/readme.html
/v1/internal/{gh-org}/{gh-repo}/{language}/{content as from docs/}
# e.g.
/v1/my-org/my-repo/{version}/en-GB/readme.html
/v1/my-org/my-repo/{version}/en-GB/proto-generated.html
(v1 is there because I predict I'll have forgotten something, and it lets me hedge against that and make later breaking-change redirects easier)
and I think therefore this is the intermediate structure I'll need to aggregate things into:
docs/
  product/
    language/
      prose|generated-lang
  gh-org/
    repo/
      language/
        prose|generated-lang
website/
  blog/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  core/
    Footer.js
  package.json
  pages/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
  sidebars.json
  siteConfig.js
  static/
    product/
      language/
        prose|generated-lang
    gh-org/
      repo/
        language/
          prose|generated-lang
... does that hang together?
I can git clone via bash or via submodules quite readily to arrange this; that's not particularly an issue. I want to know whether anything already exists that would let me avoid doing that - e.g. native features of docs-site tools, Bazel rules, whatever.
If you don't require a single-page app and don't need React (Docusaurus mentions this here), you can accomplish this using MkDocs as your static site generator and the multirepo plugin. Below are the steps to get it all set up. I assume you have Python installed and have created a Python venv.
python -m pip install git+https://github.com/jdoiro3/mkdocs-multirepo-plugin
mkdocs new my-project
cd my-project
Add the below to your newly created mkdocs.yml. This will configure the plugin.
plugins:
  - multirepo:
      repos:
        - section: Repo1
          import_url: {Repo1 url}
        - section: Repo2
          import_url: {Repo2 url}
        - section: Repo3
          import_url: {Repo3 url}
Now, you can run mkdocs serve or mkdocs build, which will build a static site with all the documentation in one site.
This will:
get a centralised search index to aid discoverability
get a centrally owned consistent theme/UX (I suggest using Material for MkDocs)
retain all the advantages of docs-close-to-code
A similar plugin could probably be written for docusaurus.
You can use a script to pull those Markdown files, put them in the right location, and then build Docusaurus. You can automate this with GitHub Actions so it runs on any change to one of your source repos.
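A minimal sketch of such a script, assuming the source repos are already cloned locally under one directory (the function name and directory layout are placeholders):

```shell
# aggregate_docs: copy each cloned repo's docs/ folder into a central docs tree,
# one subdirectory per repo, ready for the site generator to build.
aggregate_docs() {
  src_root=$1   # directory containing the cloned repos
  dest=$2       # central docs/ output directory
  for repo_dir in "$src_root"/*/; do
    repo=$(basename "$repo_dir")
    mkdir -p "$dest/$repo"
    cp -R "$repo_dir"docs/. "$dest/$repo/"
  done
}
```

In a GitHub Actions workflow you would check out (or clone) each source repo first, call something like this, and then run the site build.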
Docusaurus supports multiple docs instances via the @docusaurus/plugin-content-docs plugin. Read more about it here: https://docusaurus.io/docs/docs-multi-instance.
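A sketch of what multi-instance configuration looks like in docusaurus.config.js; the ids, paths, route bases, and sidebar file names below are placeholders for your aggregated directories:

```javascript
// Sketch: two docs instances served from different directories and routes.
module.exports = {
  plugins: [
    [
      '@docusaurus/plugin-content-docs',
      {
        id: 'product-docs',          // instance id (placeholder)
        path: 'product',             // source directory (placeholder)
        routeBasePath: 'products',   // URL prefix (placeholder)
        sidebarPath: require.resolve('./sidebars-product.js'),
      },
    ],
    [
      '@docusaurus/plugin-content-docs',
      {
        id: 'internal-docs',
        path: 'internal',
        routeBasePath: 'internal',
        sidebarPath: require.resolve('./sidebars-internal.js'),
      },
    ],
  ],
};
```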

Are phantomjs binary artifacts hosted on some repository?

I would like to download phantomjs binary within a Gradle build script.
Where are they hosted?
There is an official download repository for PhantomJS: https://bitbucket.org/ariya/phantomjs/downloads/
No.
They are only hosted on bitbucket: https://bitbucket.org/ariya/phantomjs/downloads/
To use them with Gradle, the only option I've found to work so far is the gradle-download-task plugin. To implement caching, you can copy-paste this code.
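A sketch of a build.gradle task using gradle-download-task to fetch a PhantomJS archive; the plugin version and the exact archive file name are assumptions, so check the Bitbucket downloads page for the one you need:

```groovy
import de.undercouch.gradle.tasks.download.Download

plugins {
    id 'de.undercouch.download' version '5.6.0'
}

// Download the PhantomJS archive into the build directory.
task downloadPhantomJs(type: Download) {
    src 'https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2'
    dest new File(buildDir, 'phantomjs.tar.bz2')
    onlyIfModified true   // simple caching: skip if the local copy is up to date
}
```

You would then add a Copy task with tarTree() to unpack the archive wherever your build needs it.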
Otherwise, another potential option is to declare a custom Ivy dependency, but so far I haven't found a way to make it work. The issue is that Bitbucket redirects to an Amazon S3 location, and the HEAD request that Gradle first issues to get the file size fails, probably because of a request-signature problem.

How can I convert jekyll files to something that is easily accessible such as pdf

I want to convert Jekyll files to PDF/HTML or something that can be accessed easily, without having to install any software or dependencies that I don't really use on a daily basis.
Have a look at jekyll-pandoc-multiple-formats.
I've personally never had any success with it, but I do use jekyll-pandoc for my website.
I'm not entirely sure of your goal, but assuming you have jekyll installed, you could always run jekyll serve and use a browser extension like this that has the ability to capture a screenshot of the entire webpage and save it as a PDF.
Jekyll does not require you to install any software to access the site. A Jekyll installation generates a _site folder. The contents of this folder are plain HTML and CSS. They are portable and accessible offline. Just copy this folder to your USB disk and take it with you. Jekyll IS the converter you are looking for.

Run Yard Server on Heroku

Is there a way to mount Yard (http://yardoc.org/guides/index.html) server on heroku ?
I did not find anything in the doc that explains how to do it.
Thanks a lot
This may have pitfalls I haven't uncovered yet (e.g. Yard caches its output files somewhere; since Heroku may wipe the filesystem and re-slug the app at any time, you will lose the cache files and they will have to be regenerated), but it generally works and is very simple.
Create a new folder on your hard drive somewhere (I used ~/Sites/yard-on-heroku)
Create a new Gemfile in there, listing the gems you want to be available (if they aren't in the standard Heroku install). I used the following:
source 'https://rubygems.org'
gem 'sinatra'
gem 'rails'
gem 'yard'
Run bundle install to install the gems.
Create a file called Procfile and put the following in it:
web: yard server -p $PORT -g
Create a new git repository with git init
Commit your files to it (Gemfile*, Procfile)
Create a Heroku app with heroku create
Push your repo to Heroku with git push heroku master
And that's it. If you go to the Heroku URL given when you created the app with heroku create, you'll see Yard running with all the gems available to it. If you want to show only the gems listed in the Gemfile, rather than all the gems available by default plus the ones in your Gemfile, use -G instead of -g in the Procfile.
(my first ever answer on StackOverflow, so hope it's OK - any advice on improvements, gratefully received).
I wrote a nice tutorial with my solution to this problem here: http://benradler.com/blog/2014/05/27/deploy-yard-documentation-server-to-heroku/