Apidocjs document creation issue: warning "parser plugin 'param' not found" and "missing comma" error

I tried to create API documentation using apidocjs, and I ran into issues while compiling the project to generate the apidoc from the apidoc.json in the project folder.
Code here:
~$ apidoc -i ./ -o apidoc/
And the result:
warning: parser plugin 'param' not found.
error: Error: Can not read: apidoc.json, please check the format (e.g. missing comma).
Can anyone please help me fix this issue?

Removing the apidoc destination folder as @Prasanth suggests will destroy your history if you are using the @apiVersion feature. The only way you could rebuild it would be to go through and check out each version, then run apidoc. So if you want to use versioning, this is not your answer.
You may have syntax issues or some other configuration issue.
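For reference, a minimal well-formed apidoc.json looks something like this (the field values are placeholders, not taken from the question):
{
  "name": "Example API",
  "version": "1.0.0",
  "description": "apiDoc example project"
}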
In my case, since updating, I had some functions documented in javadoc style with @param... this used to be ignored but now throws the warning.

~$ apidoc -i ./ -e apidoc/ -o apidoc/
When the API documentation is generated, it produces a main.js file that itself contains @param.
Since that @param would also be parsed on the next generation run, just exclude the output directory and you are good to go.

The issue is fixed for me as well. This error comes from the generated docs/main.js.
Usually we should parse only the necessary files to generate the apiDoc.
I used -f .php in my command, like apidoc -f .php -i ./ -o ./. This worked like a charm.

I had the same problem. I'm using a custom template, and the template folder was in the same location as my *.js files, so it was parsed as well, including the main.js template file. The -e option did not work for me, so I moved the template folder to another location and everything works fine.

I had the same issue in a rebar project using Erlang. What I did, in the project root folder, was:
rm -rf doc/
Then I ran apidoc again:
info: Done

To fix this bug, try saving package.json and apidoc.json without a UTF-8 BOM.
Some versions, parts, or dependencies of apidoc use code similar to this:
JSON.parse(fs.readFileSync(file, 'utf-8')) // throws "Uncaught SyntaxError: Unexpected token in JSON at position 0"
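If you're not sure whether a file has a BOM, a quick way to strip it in place is the following (a sketch using GNU sed; apply it to whichever file is affected):
# remove a UTF-8 BOM (EF BB BF) from the start of the file, in place
sed -i '1s/^\xEF\xBB\xBF//' apidoc.json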

Related

How do I specify JRE when creating a Bamboo sidekick agent for their per-build-container plug-in?

Trying to get the sidekick image built and having some issues. Is there any documentation other than the README.md file?
My current problem is with getting the JRE requirement working, but there are others. The page says "download Oracle JRE and place it inside the working directory. Optionally if you have a company wide distribution url, use that one at a later step." and the help says "Java (JRE) download url or path inside working directory". I have not been able to get this to work.
I went to the JRE link provided and was presented with options to download an rpm file or a tar.gz file. Which one is expected? (I was unable to get either one working.)
It says to place the file in the "working directory" but I'm not sure where exactly. I tried the sidekick folder and sidekick/jre, both without success, no matter what I used after the -j option. Is this just the path, or should the filename be included as well? Can I get an example?
I'm running this script using my login but noticed the output folder is being created with root user and group. I see no indication that this should be run with sudo. What is the correct way to run this script?
Using debug, I see the function "download if not cached". Can I save these files (JRE, Bamboo jar file, etc.) somewhere so I don't have to worry about downloading them? If so, where should they go? It looks like I might have a problem with the wget used to download the jar file, so I would like to just be able to place all these in a folder and be done with it.
It turned out the major problem was that the script didn't clean up after itself when it failed: the first failure caused subsequent failures because the output folder was already there. Removing this directory between attempts helped.
As for the correct syntax for the -j JRE option: I manually downloaded the JRE and placed it in a folder called per-build-container/sidekick/stuff/. On the command line you need not just the path but the file name as well (the tar.gz, not the RPM). In my case it was
-j stuff/jre-8u251-linux-x64.tar.gz
Note I also ran the script as sudo. That wasn't stated anywhere, but it seemed to work OK.
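Putting that together, the full invocation looked roughly like this (the script name is a placeholder of mine, not from the original docs; use whatever the README actually names):
sudo ./build-sidekick.sh -j stuff/jre-8u251-linux-x64.tar.gz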
Another issue I ran into was the download of the agent jar file. There is a redirect in the wget call that was not working for us. I ended up editing the script and replacing the Atlassian-based url with the redirected one.
This addresses all the issues I ran into with the initial question.

How can I change the project in BigQuery

When configuring the BigQuery command-line tool I was asked to set up a default project. However, now I have many projects and I can't seem to find how to switch from one to the other. Does anyone know how to navigate between projects using the command line? Thanks in advance!
You have two ways to do it:
Specify the --project_id global flag in bq. Example: bq ls -j --project_id <PROJECT>
Change the default project by issuing gcloud config set project <PROJECT>
We were also facing the same issue. We tried setting the --project_id param and the default project using gcloud config set project, but no luck.
Then we created a .bigqueryrc file and passed its path via the --bigqueryrc flag, and it worked.
Content of the .bigqueryrc file:
--project_id=gcp_project_id
Example:
bq --bigqueryrc=/path_to_file/.bigqueryrc ls -j
https://cloud.google.com/bigquery/docs/bq-command-line-tool#setting_default_values_for_command-line_flags
The default project that bq respected for me was stored in $HOME/.config/gcloud/configurations/config_default (Linux)
In my case, all the aforementioned options were overridden by the key "quota_project_id" in ~/.config/gcloud/application_default_credentials.json:
"quota_project_id": "my-project",

Sass Intellij File Watcher Output path Ubuntu

I am trying to compile a .scss file in a sass directory into a sibling css directory. However, I am not able to. I didn't find enough documentation on the File Watcher plugin either.
Currently, it compiles into the sass directory, and I need it to compile into the css directory.
I am able to compile it manually using
sass --watch sass/file.scss:css/file.css
How do I do it using Intellij File Watcher plugin?
I tried using the macros, but I don't think I understand macros well, because I either get "directory not found" or ".scss file not found". I am aware that I have to change the argument input in some way, but
--watch sass/file.scss:css/file.scss
didn't work.
Please help.
I am guessing you may have figured this out by now, but this may be helpful for future searches.
You don't need to add sass --watch to the arguments because that happens implicitly via the IntelliJ watcher.
But let's say you have a directory structure like so:
C:\
  path\
    to\
      css\
        styles.css
        sass\
          styles.scss
You would do something like this in Arguments:
--no-cache --update C:\path\to\css\sass:C:\path\to\css
And then set your working directory to C:\path\to\css\sass
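In the File Watcher settings, that maps to something like the following (a sketch using IntelliJ's built-in macros; the macro-based paths are my own substitution assuming the layout above, not from the original answer):
Program:            sass
Arguments:          --no-cache --update $FileDir$:$ProjectFileDir$\css
Working directory:  $FileDir$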

Effortless export from github wiki (?)

I am collecting quite a lot of material in a GitHub wiki. I really like using the wiki to collaborate with other people, and IMHO the platform is really nice, I like it!
So I would like to keep using the GH wiki to collect stuff, edit, save, etc., but I would also like to export the content in order to create a pdf file that we can call "a manual".
I would like to generate an updated version of the manual automatically, whenever I want, just by running a couple of scripts; I can't put too much effort into this.
I guess it is possible to export the content somehow and then use pandoc (http://johnmacfarlane.net/pandoc/) to create the pdf, maybe adding an index and a style file.
Another interesting idea could be to publish a website once a month, dumping content directly from the wiki.
I guess other people have already done something like this, but I did not find anything.
Any idea?
But... the Github wiki of a GitHub repo is a git repo in itself (introduced in August 2010).
You can clone it, push to it or pull from it.
Each wiki is a Git repository, so you're able to push and pull them like anything else.
Each wiki respects the same permissions as the source repository.
Just add ".wiki" to any repository name in the URL, and you're ready to go.
Or, as noted by htafoya in the comments, replace the .git part of the URL (if present) by .wiki.
That makes the "export" part of your question really trivial.
From there, you will find tons of scripts for converting markdown pages into pdf:
a Gradle task
a makefile
a python script
...
I'm adding to this answer in case it helps any new readers :) Here's what I did:
I installed GitHub Desktop: https://desktop.github.com/
Then, on the wiki page in my repository, I clicked "Clone in Desktop".
This saved the wiki locally as .md files (after following the steps on screen).
I then used http://www.markdowntopdf.com/ to convert them to pdf.
(Note: I renamed the files to remove characters that wouldn't work in a pdf file name before uploading them to the website.)
The end result was really nice.
I found many of the solutions difficult to reproduce, get the right version of, understand, fix, etc... So instead, I'll present a patchwork docker solution to effortlessly convert on Windows (using git bash)/MacOS/Linux in 5 "easy" commands:
git clone {project_url}.wiki .
# Convert *.md to *.md.html using the actual github pipeline
docker run --rm -e DOCKER_USER_ID=`id -u` -e DOCKER_GROUP_ID=`id -g` \
  -v "`pwd`:/src" -v "`pwd`:/out" andyneff/github-markdown-preview
# Fix hyperlinks, since wkhtmltopdf is stricter than github servers.
# (The original regex was mangled when this post was archived; the pattern
# below is a reconstruction that rewrites links like "Page.md#Anchor" to
# "page.md.html#anchor".)
docker run --rm -v `pwd`:/src -w /src perl \
  perl -p -i -e 's|(href=")(.*?)\.md(#.*?)?(")|\1\L\2\E.md.html\L\3\E\4|g' \
  *.html
# Lowercase all filenames so that the hyperlinks match
docker run --rm -v `pwd`:/src -w /src python \
  python -c 'import sys; import os; [os.rename(f, f.lower()) for f in sys.argv[1:]]' \
  *.md.html
# Convert html to pdf using QT webkit
docker run -it --rm -e DOCKER_USER_ID=`id -u` -e DOCKER_GROUP_ID=`id -g` \
  -v `pwd`:/work -w /work andyneff/wkhtmltopdf \
  wkhtmltopdf --encoding utf-8 --minimum-font-size 14 \
    --footer-left "[date]" --footer-right "[page] / [topage]" \
    --footer-font-size 10 \
    toc \
    *.html document.pdf
The perl step is the main part that may fail, and I don't have a better solution for it. Pandoc has a really good filter solution, but it doesn't use the github pipeline.
Bugs:
Extra-wide code blocks will be rendered with a scroll bar, and essentially cut off in the pdf. It would be best to make the code block not overflow, but as a workaround you can add --user-style-sheet user.css to the wkhtmltopdf command (before toc/cover), and add this to your user.css:
.markdown-body .highlight pre,
.markdown-body pre {
  overflow: visible !important;
}
Some links in the final pdf are off by +1 page, and some are not. I'm not sure what the pattern is, but anchors with ids (#) do not appear to have this problem.
Another option once you clone the wiki, especially if you are already using Atom, is to use this Markdown to PDF package.
Worked great for me.
I found it really annoying to have to convert each markdown document separately (links between markdown documents are lost), so I ended up writing a simple C# program for my own use that does this in a single step: a) download the latest version of the wiki from GitHub, b) convert all the markdown documents, merged, into one pdf.
You can download the binaries (Windows or any platform supporting Mono) from:
https://github.com/borjafdezgauna/CoderDocTools/releases/latest
If, for example, you want to convert the SimionZoo repository by user simionsoft to PDF, you can run:
MarkdownToPDF.exe user=simionsoft project=SimionZoo output-file=SimionZoo.pdf
I've accomplished precisely this when creating the portable documentation for Barcode Writer in Pure PostScript:
GitHub Wiki + Makefile + pandoc → PDF
The process is described in this blog post.
This question has already been answered, but I wanted to add my quick experience here.
I didn't find it necessary to install the Desktop version of Github. You can clone by simply running the following from your commandline:
git clone git@github.com:<username>/<repository>.wiki.git
(Of course, replace username and repository as needed).
The cloned wiki produced 72 markdown files. As has been said, there are numerous ways of converting these files to PDF; you can pick your own tool. However, I will say that the easiest solution I encountered was to install Pandoc. I have macOS + homebrew, so a quick brew install pandoc was all I needed.
Some info on using pandoc here: https://stackoverflow.com/a/14908316/3638172
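A minimal sketch of merging the cloned pages into a single pdf with pandoc (it assumes a LaTeX engine is installed for pdf output and that alphabetical page order is acceptable; adjust the file list as needed):
pandoc *.md -o wiki.pdf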
You can also try html_links_to_pdf!
It's a Python 3 script made just to convert a GitHub Wiki to pdf form, using the same styling that GitHub uses, but slightly cleaner.

Class-Dump Installation

So this may sound like a really stupid question, and I HAVE looked at the how-to from the parent website, but no matter what I do, I cannot get this program to even start to install...
I tried entering:
cd /opt/local/bin/portslocation/dports/class-dump
which returned a "this file/directory doesn't exist" error, so I tried to get to it folder by folder. When I got all the way to:
cd /opt/local/bin/
I cannot go any further. When I check the contents of the bin directory, the only files I can find are (and apparently I cannot access these either):
"daemondo port portf portindex portmirror"
I have tried doing this on 2 computers so far, to no avail. MacPorts is installed on both, like the website said, and I am having trouble finding any support for it. Please and thank you!!
Unless you're trying to develop it, why screw around? Use homebrew. As of today, and modulo a sudo here or there, you can install homebrew with
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
If this doesn't work for you, check the command line at the top of the page on homebrew.†
After this, install class-dump with
brew install class-dump
Done.
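As a quick sanity check, usage is the same as with a manually installed binary (a sketch; the app path below is just an illustration, point it at whatever binary you want to dump):
class-dump -H /Applications/Calculator.app/Contents/MacOS/Calculator -o ~/Desktop/CalculatorHeaders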
† Ping me in the comments with "command line update needed", and I'll try to keep this in sync. :)
I had issues with this for hours, but it is actually quite simple.
I downloaded the class-dump 3.3 version from CodeTheCode, unzipped the file, and copied the class-dump binary to a directory. In my case the directory was /opt/theos/bin.
In Terminal, I then changed to that directory using cd /opt/theos/bin.
Running the class-dump command line utility is then as simple as this:
./class-dump
Obviously you then need to give it its arguments; in my case I was using it to dump the iOS headers from the frameworks, so I used:
./class-dump -H /Developer/Library/iPhoneSimulator4.3.sdk/Frameworks/UIKit.framework/UIKit -o ~/Desktop/UIKit
Obviously I am not sure what you are using it for, but in the example above I am telling class-dump to dump header files from the framework given and output them to ~/Desktop/UIKit.
The same theory carries through whatever you use it for.
Have you tried getting the binary from http://www.codethecode.com/projects/class-dump/?