I'm currently using JSDoc Toolkit to document my code, but it doesn't quite fit; namely, it seems to struggle with describing namespaces properly. Say you have two simple classes, each in its own file:
lib/database/foo.js:
/** @class */
function Foo(...) {...}
/** @function ... */
Foo.prototype.init = function (..., cb) { return cb(null, ...); };
module.exports = Foo;
And then something inherited, lib/database/bar.js:
var util = require('util');
var Foo = require('./foo');
/**
 * @class
 * @augments Foo
 */
function Bar(...) {...}
util.inherits(Bar, Foo);
Bar.prototype.moreInit = function (..., cb) { return cb(null, ...); };
In the generated documentation, this is output simply as Foo and Bar, without the leading database (or lib.database), which is quite necessary when you don't have everything in the global scope.
I've tried throwing @namespace database and @name database.Foo at it, but it doesn't turn out nice.
Any ideas for making JSDoc output something more suitable, or some entirely different tool that works better with Node.js? (I looked briefly at Natural Docs, JSDuck and breezed over quite a few others that looked quite obsolete...)
JSDoc is a port of Javadoc, so the documentation basically assumes classical OOP, which is not well suited to JavaScript.
Personally I would recommend using docco to annotate your source code. Examples of it can be found for underscore, backbone, docco.
A good alternative to docco is groc.
As for actual API documentation, I personally find that auto-generated documentation from comments just does not work for JavaScript, and recommend you hand-write your API documentation.
Examples would be the underscore API, the Express API, the Node.js API, and the socket.io docs.
Similar Stack Overflow questions:
Generating Javascript documentation
YUIDoc is a Node.js application that generates API documentation from comments in source, using a syntax similar to tools like Javadoc and Doxygen. YUIDoc provides:
Live previews. YUIDoc includes a standalone doc server, making it trivial to preview your docs as you write.
Modern markup. YUIDoc's generated documentation is an attractive, functional web application with real URLs and graceful fallbacks for spiders and other agents that can't run JavaScript.
Wide language support. YUIDoc was originally designed for the YUI project, but it is not tied to any particular library or programming language. You can use it with any language that supports /* */ comment blocks.
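For example, the Foo class from the question could be annotated for YUIDoc along these lines (a rough sketch; the module name, parameters, and method body are assumptions, not part of the original code):

/**
 * Database helpers.
 * @module database
 */

/**
 * A class backed by the database.
 * @class Foo
 * @constructor
 */
function Foo(options) { /* ... */ }

/**
 * Initializes the instance.
 * @method init
 * @param {Function} cb Node-style callback, invoked as cb(err, result).
 */
Foo.prototype.init = function (cb) { return cb(null, this); };

module.exports = Foo;

The @module tag is what gives you the database grouping in the generated docs, so classes no longer appear as if they were global.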
NOTE: Dox no longer outputs HTML, but a blob of JSON describing the parsed code. This means the code below doesn't work terribly well any more...
We ended up using Dox for now. It is a lot like docco, which Raynos mentions, but it throws all of it into one big HTML file for output.
We hacked this into our makefiles:
JS_FILES := $(shell find lib/ -type f -name \*.js | grep -v 3rdparty)
#Add node_modules/*/bin/ to path:
#Ugly 'subst' hack: Check the Make Manual section 8.1 - Function Call Syntax
NPM_BINS:=$(subst bin node,bin:node,$(shell find node_modules/ -name bin -type d))
ifneq ($(NPM_BINS),)
PATH:=${NPM_BINS}:${PATH}
endif
.PHONY: doc lint test

doc: doc/index.html

doc/index.html: $(JS_FILES)
	@mkdir -p doc
	dox --title "Project Name" $^ > $@
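With this in place, running make doc regenerates doc/index.html whenever any of the JavaScript files change.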
It is not the prettiest or most efficient documentation ever made (and dox has quite a few minor bugs), but I find it works rather well, at least for minor projects.
Sorry, I was not on StackExchange a year ago, but I believe the answer to your original question is to use the @memberOf tag:
/** @namespace */
database = {};

/**
 * @class
 * @memberOf database
 */
function Foo() { ... };
http://code.google.com/p/jsdoc-toolkit/wiki/TagMemberOf
This tag may or may not have existed when you asked your question.
Found a really nice solution for the problem: doxx.
It uses dox, as mentioned above, and converts the output to nice HTML afterwards. It is easy to use and worked great for me.
https://github.com/FGRibreau/doxx
I work with JSDoc and it is very efficient, as well as easy, but development gets quite complicated when projects have many alternate libraries. I found Groc a very good tool based on Docco, and it works with other languages like Python, Ruby, C++, among others...
Furthermore, Groc works with Markdown on GitHub, which can be much more efficient when working with git as version control. It also helps assemble pages for publishing on GitHub.
You can also use the GruntJS task runner via grunt-groc. For example:
Install the package:
npm install grunt-groc --save-dev
Load it in your task file:
grunt.loadNpmTasks('grunt-groc');
And configure the task:
// Project configuration.
grunt.initConfig({
groc: {
coffeescript: [
"coffee/*.coffee", "README.md"
],
options: {
"out": "doc/"
}
}
});
To run the task:
grunt.registerTask('doc', ['groc'])
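You can then generate the docs by running grunt doc.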
Max, I want to update my extension to the new format, but I am running into issues with the placement of custom code. It seems that the extension framework has been updated a lot since I added an extension 4 years ago. Is there a way to get better documentation on getting started with adding an extension? I am happy to help write up the documentation if you can help answer some questions that I think would help get people started. Let me know.
The only thing that really changed is that the scaffolder creates a webpack project for you. The extension registering procedure is the same: http://js.cytoscape.org/#extensions/api
For example, cytoscape( 'collection', 'fooBar', function(){ return 'baz'; } ) registers eles.fooBar().
I guess the main thing is that there are a lot more files than what the previous scaffolder generated, so it might be harder to find things. The layout output has lots of files because it creates a skeleton impl for each of the continuous and discrete cases.
The scaffolder isn't strictly necessary. You could use another build system (or none at all) as long as you call cytoscape(). For example, if you only care about publishing to npm for people who use webpack/browserify/rollup, then you could just use cjs require('cytoscape') to pull in the peer dependency. Exporting a register function is nice if you want to allow the client to decide the order of extension registrations with cytoscape.use(extension) (or extension(cytoscape)).
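For instance, a register function for the fooBar example above might look roughly like this (a sketch, not the scaffolder's exact output):

// Export a register function so the client controls registration order.
function register(cytoscape) {
  if (!cytoscape) { return; } // nothing to register against

  // As in the comment above: this makes eles.fooBar() available.
  cytoscape('collection', 'fooBar', function () {
    return 'baz';
  });
}

// Auto-register when cytoscape is present as a global (plain script tag).
if (typeof cytoscape !== 'undefined') {
  register(cytoscape);
}

module.exports = register; // enables cytoscape.use(extension)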
You're right that there should be some more docs on the output of the scaffolder. Maybe a summary of the files would suffice. We could add a tutorial in the blog later if need be. Both the docs and the blog just use markdown, so the content could go in either place.
I was looking in and saw this comment in the indir implementation:
sub indir(Str() $path, $what, :$test = <r w>) {
    my $newCWD := $*CWD.chdir($path, :$test);
    $newCWD // $newCWD.throw;

    {
        my $*CWD = $newCWD; # temp doesn't work in core settings :-(
        $what();
    }
}
I thought this use of my was strange, which led to doc issue #1082, niggling about whether my is actually lexical. I would have thought that temp would be more appropriate for user-level temporary changes to dynamic variables.
But now I see this comment, and I'm not quite sure what it means. Is temp broken this deep? Is it not available here? Or is the comment just wrong?
If the comment is right, has this way of dealing with dynamic variables leaked up to the everyday-programmer level because it is what some people have to do in the guts (and they got used to it)?
And how low-level is this level, really? It seems like all of Perl 6 should be available here.
Perhaps the comment in the source code would be less misleading if it was:
# temp $*CWD doesn't work in core settings (which we're in)
# my $*CWD = ... is a decent workaround in this case :)
It seems like all of Perl 6 should be available here.
Full Perl 6 must wait until after completion of compilation of the Perl 6 CORE setting. This corresponds to the Rakudo Perl 6 compiler's "core" src tree. This includes the code with the "# temp doesn't work in core settings :-(" comment.
To emphasize @raiph's point: in general, it's unreasonable to expect any particular Perl 6 feature implemented in Rakudo to work at any given point in the CORE, because that's how we make the features available.
Developers working on the core have to be aware of this, and take it into account, for example, in ordering how the CORE is built, and which features are available at which point (and further, which features are more performant at a lower level, so the Perl 6 you see in CORE may not be idiomatic for several different reasons.)
I have a problem with requiring programmatically created modules for browserify.
var File = require("vinyl"),
    browserify = require("browserify");
var bundler = browserify();
bundler.require(new File({contents: new Buffer(...)}), {expose: "mymodule"});
bundler.bundle();
...
In the output file I have the content of the buffer, but it is not exposed as "mymodule".
Has anybody used it this way?
This was a bug in browserify, but it was fixed with this patch: https://github.com/substack/node-browserify/pull/907
Your code above should work in version 6.0.1 and above.
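For reference, a self-contained sketch of the pattern, based on the snippet in the question (the module body here is made up for illustration):

var browserify = require("browserify");
var File = require("vinyl");

var bundler = browserify();

// Expose an in-memory module under the name "mymodule".
bundler.require(
    new File({contents: new Buffer("module.exports = 'hello from memory';")}),
    {expose: "mymodule"}
);

// Other bundled code can now call require("mymodule").
bundler.bundle(function (err, buf) {
    if (err) throw err;
    process.stdout.write(buf); // the generated bundle
});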
Since this question was posted James Halliday (substack, creator of browserify) has been hard at work coming up with the browserify handbook: https://github.com/substack/browserify-handbook
This resource is excellent. It goes really deep into how require works and how dependencies are resolved in general. It is by far the best resource I have ever seen on the subject. Perhaps if you (or anyone else landing on this question with a similar problem) give it a read, the answer might jump out at you.
I know it's not a direct answer to your question, but I only recently found this resource, and I wish I knew it existed earlier.
For an introduction to browserify, I recommend watching this tagtree video: http://tagtree.tv/browserify-an-intro?share_code=uncoopered-inspirer
Is there some alternative way to document your own functions/methods/variables in Objective-C?
Like XML documentation in C# and Javadoc in Java.
I would recommend using Doxygen. It is what we use internally at work and it works really well. The fact that you can then also use the same system for other languages is an added bonus if you eventually come to need that.
There is a good guide for automating the generation of your Doxygen docs with your builds here: http://www.guidebee.biz/forum/viewthread.php?tid=168
There has been some development since the other answers have been posted.
AppleDoc has evolved and become quite nice. It creates doc pages in the style of Apple's own pages, which is what you are after if I interpret your question correctly.
Documentation of the comments format here.
I'm having a deja vu ;-) Anyway, it looks like Doxygen can handle Objective-C as well; I have not personally tried it though.
Good news everyone! Xcode 5 now has built-in support for Doxygen-style comments. So, you can comment your methods like this:
/*!
 * Provides an NSManagedObjectContext singleton appropriate for use on the main
 * thread. If the context doesn't already exist it is created and bound to the
 * persistent store coordinator for the application, otherwise the existing
 * singleton context is returned.
 * \param someParameter You can even add parameters
 * \returns The shared NSManagedObjectContext for the application.
 */
+ (NSManagedObjectContext *)sharedContext;
Inline help, Quick Help, and the sidebar help will all pick up this documentation.
Here's a handy code snippet you can add to your Xcode Code Snippet library to make method documentation simple:
/**
 <#description#>
 @param <#parameter#>
 @returns <#retval#>
 @exception <#throws#>
 */
Now you can just type "doxy" and poof! You have your Doxygen template.
We have a large collection of command-line utilities that we write ourselves and use frequently. At the moment, testing them is very cumbersome and consequently we don't do as much testing as we ought to.
I am wondering if anyone can suggest good techniques or tools for doing a good job of this kind of thing.
This is UNIX.
I recommend structuring your command line tool's code so that the command line utility is a client to a library of functions and/or classes.
Rather than simply using std::cout to print output, have the library's functions take an ostream reference that defaults to std::cout. When you are testing, provide a std::stringstream to collect the output.
Finally, simply compare your utility's output with expected results using your favorite unit testing framework.
(I apologize for the C++ specific example... I'm sure there are ways to do similar things in other languages too).
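The same idea carries over directly; for instance, a Node.js sketch (all names here are illustrative, not from the original answer):

var assert = require("assert");
var stream = require("stream");

// Library function: writes to an injectable stream, defaulting to stdout.
function greet(name, out) {
    out = out || process.stdout;
    out.write("Hello, " + name + "!\n");
}

// In a test, collect the output in memory instead of printing it.
var chunks = [];
var sink = new stream.Writable({
    write: function (chunk, encoding, callback) {
        chunks.push(chunk);
        callback();
    }
});

greet("world", sink);
assert.equal(Buffer.concat(chunks).toString(), "Hello, world!\n");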
You can write tests that resemble an interactive shell session using Cram. It has a flexible test specification format that allows you to match output using Perl regexes or shell-like wildcards. Cram will replay commands from the test, compare the output to the reference, and report differences.
Aruba is a Cucumber extension for testing command line applications written in any programming language.
To use it, you will need Ruby to run the tests, but the purpose of Aruba is to provide a library of pre-defined step definitions so that you won't need to write any Ruby code to make a workable test suite. (Though at some point you will probably want to write a bit of Ruby to make a few custom steps.)
You can see a sophisticated example of a command line tool tested with aruba here: jingweno/gh
You should be able to call them from a shell script (batch file, on MS operating systems), redirect the output to a file, then scan the file programmatically to ensure that it has the correct output. I'm not aware of a testing framework that automates this for you, but it should be fairly straightforward to set it up yourself.
Bats (Bash Automated Testing System) by Sam Stephenson. It is tiny, written purely in shell and has a nice set of features.
The previously suggested Aruba looks interesting, but in some cases it might be quite overkill in terms of dependencies (Ruby, Cucumber).
I did a little bit of this (a loooong time ago, hehe) using Expect to check that what happened was what I, umm, expected.
I have developed a tool "Exactly"
https://github.com/emilkarlen/exactly
It executes the thing to test in a temporary sandbox directory.
The README contains a number of examples.
A test of a hypothetical program "classify-files-by-moving-to-appropriate-dir" can look like this:
[setup]
dir input
dir output/good
dir output/bad
file input/a.txt = <<EOF
GOOD contents
EOF
file input/b.txt = <<EOF
bad contents
EOF
[act]
classify-files-by-moving-to-appropriate-dir GOOD input/ output/
[assert]
dir-contents input empty
exists output/good/a.txt : type file
dir-contents output/good num-files == 1
exists output/bad/b.txt : type file
dir-contents output/bad num-files == 1
You can do this from a batch file or Windows Script Host.
But I would recommend using a task scheduler like WinCron (http://www.splinterware.com/products/wincron.htm) or other free/professional software.
There you can easily copy/paste the command-line parameters that you want to vary when you need to test your software many hundreds of times.
You could use Perl with the Test::More library, which provides a great framework for testing CLIs.
Though primarily designed for unit testing, you can extend it to test user workflows.
Some of the methods:
use Test::More;   # imports ok, is, isnt, diag, like, and friends

# Various ways to say "ok"
ok($got eq $expected, $test_name);
is ($got, $expected, $test_name);
isnt($got, $expected, $test_name);
# Rather than print STDERR "# here's what went wrong\n"
diag("here's what went wrong");
like ($got, qr/expected/, $test_name);
unlike($got, qr/expected/, $test_name);
cmp_ok($got, '==', $expected, $test_name);
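Such test scripts can be run directly with perl, or through the standard prove test harness.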