Link to external types in ExDoc documentation

I recently created my first Hex package, Ecto.Rut, and I'm now working on its documentation. Since it uses Ecto.Repo under the hood and returns Ecto.Schema and Ecto.Changeset types, I wanted to link to them in the @specs.
Internal and Elixir core types (such as Keyword.t) are automatically linked, but ex_doc doesn't link external types defined in the Ecto modules. How do I make that happen?
I've already tried specifying the complete module name in the spec, but that doesn't work:
@callback all(opts :: Keyword.t) :: [Ecto.Schema.t] | no_return

After some discussion on ElixirForum, José added this feature. ExDoc v0.14.2 and onwards supports auto-linking for external dependency modules.
From the GitHub page:
By referring to a module, function, type or callback from any of your dependencies, such as MyDep, ExDoc will automatically link to that dependency documentation on hexdocs.pm (the link can be configured with the :deps option in your mix.exs)
This means that simply mentioning the complete module name auto-links types, callbacks, modules, and functions. So, after updating to the latest ExDoc, my existing code now auto-links:
@callback all(opts :: Keyword.t) :: [Ecto.Schema.t] | no_return
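For reference, the :deps option mentioned in the quote lives under the docs configuration in mix.exs. A minimal sketch (the app name, version numbers and URL are illustrative; by default ExDoc already links dependencies to their hexdocs.pm pages):
# mix.exs - hedged sketch; versions and the URL are illustrative.
# The :deps option lets you override the docs URL per dependency.
defmodule EctoRut.Mixfile do
  use Mix.Project

  def project do
    [
      app: :ecto_rut,
      version: "1.2.0",
      deps: [
        {:ecto, "~> 2.0"},
        {:ex_doc, ">= 0.14.2", only: :dev}
      ],
      docs: [
        deps: [ecto: "https://hexdocs.pm/ecto/"]
      ]
    ]
  end
end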

Related

ADTF SDK: import manifest AND handle it

I'm trying to run a full ADTF configuration from my own C++ command-line application using the ADTF SDK. ADTF version: 2.9.1 (pretty old).
Here's what I want to do:
1. Load the manifest file
2. Load the globals XML
3. Load the config XML
Points 2 and 3 are done using the session-manager service - see the ISessionManager interface (https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_session_manager.html), functions LoadGlobalsFromFile and LoadConfigFromFile.
The problem is that I don't know how to do point 1: currently, instead of loading a manifest, I load the list of services myself using _runtime->RegisterPlugin, _runtime->CreateInstance and _runtime->RegisterObject.
What I've managed to do is to load only the namespace service and use the INamespace interface which has a method for loading manifest files: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_namespace.html - see ImportFile with ui32ImportFlags = CF_IMPORT_MANIFEST.
But this only loads the manifest settings into the namespace; it doesn't actually instantiate the services. I could do it manually by:
1. Calling _runtime->RegisterPlugin for every url under root/plugins/ in the namespace
2. Calling _runtime->CreateInstance for every objectid under root/services/ in the namespace
But I want this to be more robust, and I'm hoping there's already a service that subsequently processes the populated namespace and performs these actions. Is there such a service?
Note: if you know how this could be done in ADTF 3, that might also be of help to me, so don't hesitate to answer/comment.
UPDATE
See "Flow of the system" on this page: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_service_layer.html
Apparently the runtime instance itself handles the manifest file (see run-levels shutdown & kernel) but I don't know how I'm supposed to tell it where it is.
I've tried instantiating cRuntime with the command-line argument count set to 2 and the second argument set to the manifest file path. It doesn't work :).
In ADTF3 you can just use the supplied cADTFSystem class to initiate an ADTF system and then use the ISessionManager interface to load a session of your choice.
Found the answer, though it's not exactly what I expected. I tried debugging adtf_runtime.exe to find out what arguments it passes to cRuntime.
The result is indeed similar to what I'd suspected (and actually tried):
arg1 = adtf_runtime.exe (argv[0] in adtf_runtime)
arg2 = full path to manifest file (e.g. $(ADTF_DIR)\bin\adtf_devenv.manifest)
arg3 = basename of manifest file, without extension (e.g. "adtf_devenv")
While this suggested that cRuntime is indeed responsible for loading and handling the manifest, it turned out NOT to be quite so: passing the same arguments to it did not do the job. The answer came when I noticed that adtf_runtime.exe was actually using an extension of cRuntime called cRuntimeEx, which is NOT part of the SDK (at least I haven't found it).
This class IS among the exported symbols of the ADTF SDK library, i.e. a "dumpbin /symbols adtfsdk_290.lib" renders at some point:
public: __cdecl adtf::cRuntimeEx::cRuntimeEx(int,char const * * const,class ucom::IException * *)
but it is NOT part of the SDK (you won't find a header file defining it).
Among its methods you'll also find this:
protected: long __cdecl adtf::cRuntimeEx::LoadManifest(class adtf_util::cString const &,class std::set<class adtf_util::cString,struct std::less<class adtf_util::cString>,class std::allocator<class adtf_util::cString> > *,class ucom::IException * *)
Voila. And thus, unfortunately, I cannot achieve what I wanted in a robust fashion. :)
I ended up manually implementing the manifest-loading logic, since cRuntimeEx is not made available within the SDK. Something along these lines (see the sketch after this list):
1. Use a cDOM instance to load the manifest file
2. Call FindNodes("/adtf:manifest/environment/variable") to find the environment variables that need to be set, and set them using cSystem::SetEnvVariable
3. Call FindNodes("/adtf:manifest/dependencies/platform") to find library dependencies, and use cDynamicLinkage::Load to load the libraries that target the current platform (win32/linux)
4. Call FindNodes("/adtf:manifest/plugins/plugin") to find the plugins to be loaded using _runtime->RegisterPlugin (you may also handle the "optional" attribute)
5. Call FindNodes("/adtf:manifest/services/service") to find the services that need to be created using _runtime->CreateInstance and _runtime->RegisterObject (again handling the "optional" attribute)
6. Finally, call FindNodes("/adtf:manifest/manifests/manifest") to (recursively) load child manifests (again handling the "optional" attribute)
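For illustration, a rough sketch of that logic. Only cDOM, FindNodes, cSystem::SetEnvVariable, cDynamicLinkage::Load and the _runtime calls come from the steps above; cDOMElementRefList, GetAttribute, GetData and the exact signatures are assumptions to verify against your SDK headers:
// Hedged sketch - helper names and signatures assumed, error paths and
// the "optional" attribute omitted.
tResult LoadManifestManually(const adtf_util::cString& strManifestFile)
{
    adtf_util::cDOM oDOM;
    RETURN_IF_FAILED(oDOM.Load(strManifestFile));

    adtf_util::cDOMElementRefList oNodes;

    // Step 2: set environment variables
    if (IS_OK(oDOM.FindNodes("/adtf:manifest/environment/variable", oNodes)))
    {
        for (adtf_util::cDOMElementRefList::iterator it = oNodes.begin();
             it != oNodes.end(); ++it)
        {
            adtf_util::cSystem::SetEnvVariable((*it)->GetAttribute("name"),
                                               (*it)->GetAttribute("value"));
        }
    }

    // Step 3: load platform-specific library dependencies
    // (filter the nodes on win32/linux, then cDynamicLinkage::Load(...))

    // Step 4: register the plugins
    if (IS_OK(oDOM.FindNodes("/adtf:manifest/plugins/plugin", oNodes)))
    {
        for (adtf_util::cDOMElementRefList::iterator it = oNodes.begin();
             it != oNodes.end(); ++it)
        {
            _runtime->RegisterPlugin((*it)->GetData()); // signature assumed
        }
    }

    // Step 5: create and register the services
    // (_runtime->CreateInstance + _runtime->RegisterObject per node)

    // Step 6: recurse into /adtf:manifest/manifests/manifest

    RETURN_NOERROR;
}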
The only thing you need to do is start the ADTF launcher with the meta files (manifest). This works for ADTF 2 as well as for ADTF 3, and it can be done from a (console) application. If you also want to do a little bit more in ADTF 3, you can use adtf_control instead of adtf_launcher, with its scripting interface (see the scripts under examples).

How do I keep Flow annotations intact through Babel/Webpack?

I have a private NPM module for utility functions with Flow type annotations.
I develop in Node v7 and, prior to npm publish, I use Babel/Webpack to transform it for earlier Node versions so it can run in environments like AWS Lambda.
I use transform-flow-strip-types plugin for Babel to compile, but as I understand it, that means I lose static type checking of my exported functions when I import the module into another project.
I tried babel-plugin-syntax-flow, but it throws unexpected token errors, so I'm assuming this isn't its intended use.
Can I transform my src/ with Babel while keeping the Flow types intact?
The type annotations are simple (string, number, mostly), so I'd like to avoid writing typedefs to export with every function.
I came across an article explaining how to achieve what you are describing:
http://javascriptplayground.com/blog/2017/01/npm-flowjs-javascript/
By publishing the JavaScript with Flow types stripped together with the original Flow-typed file, you get proper Flow type checking when using your library.
This is achieved by publishing the original file as file.js.flow, and the babelified file as file.js.
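In practice that boils down to something like the following (a sketch assuming the flow-copy-source helper the article describes; paths are illustrative):
# compile src/ to dist/ with Flow annotations stripped (what gets executed)
babel src/ --out-dir dist/
# copy each original src/X.js to dist/X.js.flow (what Flow reads on import)
flow-copy-source src dist
Point "main" in package.json at the compiled dist/ file; when a consuming project imports the module, Flow automatically picks up the adjacent .js.flow file instead of the stripped code.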

SuiteCRM developing a custom module

I am trying to build a custom module based on the 'basic' template, with extra fields without using the module builder.
I have looked through the SugarCRM 6.5 documentation, bought the book SuiteCRM for Developers, and looked through the sources of existing modules, but I still cannot figure out how to put a working module together.
Does a minimal module template exist anywhere? What I am looking for is a fully working module with one extra field, which can be deployed on a SuiteCRM instance. I can take it from there.
There's no minimal module template that I know of; you may want to consider creating a test module through Module Builder and exporting it to see what the parts are.
Usually, though, modules have the following files. The example uses the module ABC_Sport.
custom/Extension/application/Ext/Include/ABC_Sport.php
This adds the module to the module list and adds the beans, i.e.:
$beanList['ABC_Sport'] = 'ABC_Sport';
$beanFiles['ABC_Sport'] = 'modules/ABC_Sport/ABC_Sport.php';
$moduleList[] = 'ABC_Sport';
custom/Extension/application/Ext/Language/en_us.ABC_Sport.php
(Note you may want to add files for different languages.)
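That file might look like this (a minimal sketch; the label text is illustrative):
<?php
// custom/Extension/application/Ext/Language/en_us.ABC_Sport.php
// Adds the module's display name to the application strings.
$app_list_strings['moduleList']['ABC_Sport'] = 'Sports';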
Next up you'll need to create the bean file in
modules/ABC_Sport/ABC_Sport.php
and the vardefs in
modules/ABC_Sport/vardefs.php
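A minimal sketch of those two files, assuming a single extra varchar field named sport_type (the class body, field and labels are illustrative, not a definitive implementation):
<?php
// modules/ABC_Sport/ABC_Sport.php - minimal bean class (sketch)
require_once 'data/SugarBean.php';

class ABC_Sport extends SugarBean
{
    public $module_dir  = 'ABC_Sport';
    public $object_name = 'ABC_Sport';
    public $table_name  = 'abc_sport';
    public $new_schema  = true;
}
and the vardefs:
<?php
// modules/ABC_Sport/vardefs.php - vardefs with one extra field (sketch)
$dictionary['ABC_Sport'] = array(
    'table' => 'abc_sport',
    'fields' => array(
        'sport_type' => array(
            'name'  => 'sport_type',
            'vname' => 'LBL_SPORT_TYPE',
            'type'  => 'varchar',
            'len'   => 100,
        ),
    ),
    'indices' => array(),
    'relationships' => array(),
);
// pull in the standard fields (id, name, dates, ...) from the basic template
VardefManager::createVardef('ABC_Sport', 'ABC_Sport', array('basic'));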
I'm not totally sure whether the metadata files are required, but you'll also likely want to add the editviewdefs, detailviewdefs and listviewdefs.

Application modules with Pyramid

I'm creating a workflow app with Pyramid and I'm looking for a way to make the application modular, meaning: create a core app with SQLAlchemy models, base forms with WTForms, and some base templates with Mako.
The basic structure of the "Core" app is:
App_Core/core.ini
/setup.py
/...
/App_Core/
/__init__.py
/models.py
/forms.py
/utils.py
/templates/
/templates/base.mako...
/static/
/static/staticfiles...
My goal is to create one application per workflow, each included in the core app; it seems possible to do that via the includeme function provided by Pyramid.
I want to include each workflow via the core.ini file, for example:
pyramid.includes =
workflow_app1
workflow_app2
workflow_app3
...
I defined a new app called workflow_app1 with the following structure:
worflow_app1/
/setup.py
/...
/workflow_app1/
/__init__.py
/models.py
/forms.py
/views.py
/templates/
/templates/workflow_app1.mako
/...
And its __init__.py file will contain the includeme function and will define new routes:
def includeme(config):
    config.add_route('route1', '/route1/')
    config.add_route('route2', '/route2/')
    config.scan()
When I write a view for workflow_app1, it renders a template included with that app; but when I call it from the core app, it can't render the template and gives the following error:
TopLevelLookupException: Cant locate template for uri 'workflow-app1.mako'
This error is quite logical, because the mako.directories directive is given the path App_Core_PATH/templates, so my template would have to be in that folder.
Question 1:
Is it possible to make Mako search each module's templates folder for the wanted templates?
Question 2:
Is it possible to make workflow_app1.mako inherit from the base.mako of the core app?
Thanks in advance for your answers.
The solution that I would recommend is switching to asset specifications for your templates. They are explicit, allow overriding, and provide better control over your template hierarchy. This means that you would stop using mako.directories and instead use 'workflow_app1:templates/workflow_app1.mako' in your inherits, include, or renderer arguments. With asset specs it's obvious that you can inherit from the base.mako in your core app, whereas managing the mako.directories option is more difficult.
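For example (a sketch; the route and view names are illustrative):
# workflow_app1/views.py - render through an asset spec rather than
# relying on mako.directories
from pyramid.view import view_config

@view_config(route_name='route1',
             renderer='workflow_app1:templates/workflow_app1.mako')
def route1_view(request):
    return {'title': 'Workflow app 1'}
Inside workflow_app1.mako you can then inherit from the core app's base template with an asset spec as well: <%inherit file="App_Core:templates/base.mako"/>.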
If you're dead set on mako.directories, then you can add a line to it every time you add a package to pyramid.includes.
mako.directories =
App_Core:templates
workflow_app1:templates
workflow_app2:templates
Another option is to switch to Jinja2, as its plugin can add search paths after the fact. Your included modules can thus call config.add_jinja2_search_path(...) to throw themselves into the lookup order. Pyramid's Mako integration does not offer this option right now.
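Something along these lines (a sketch using pyramid_jinja2; the package name is illustrative):
# workflow_app1/__init__.py - Jinja2 variant of the includeme above
def includeme(config):
    config.include('pyramid_jinja2')
    # add this package's templates to the shared Jinja2 search path
    config.add_jinja2_search_path('workflow_app1:templates')
    config.add_route('route1', '/route1/')
    config.scan()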

Code looking for modules in the wrong place

I have created a multi-layer build using build.dojotoolkit.org (my first attempt) with 3 layers: dojo.js, dojox.js, dijit.js. Each js file is uploaded in its own folder (dojo, dojox, dijit).
When I run the code, I would expect it to look in dijit.js to get the form modules like dijit.form.TextBox. But instead it tries to load dijit/form/TextBox.js and of course ends up with a 404 error.
What am I doing wrong?
The files are here if it helps:
http://usermanagedsolutions.com/Demos/Pages
Manually include each layer in a script tag on the page.
<script src="path/to/dojo.js" />
<script src="path/to/dojox.js" />
<script src="path/to/dijit.js" />
This will make available all modules that you have defined in the build. When you require the text box, Dojo will see that it has the code and will not make the XHR call.
Even though you do not have the intention of using the individual files, you may want to put them on the server as well. This way, if someone forgets to add a file to the build, the penalty incurred is an XHR request, as opposed to a JavaScript error.
Re: AMD
When you include your layers in the manner I described above, you are not loading all the modules that you included in the build - you are just making the define functions available without having to make XHR requests.
If you look at the js file that is output from the build, the file contains a map of the module path to a function that when called will define the module.
So when you write the following code
require(["dijit/form/TextBox"], function(TextBox){
...
});
AMD will first determine if dijit/form/TextBox has already been defined. If so it will just take the object and execute the callback.
If the module hasn't already been defined, then AMD will look in its cache to see if the define code is available. When you include your script files, you are providing a cache of define functions. AMD finds the code to define the module; it calls this define function, and the result is the object that is passed into the callback. Subsequent requires for dijit/form/TextBox will also use this object, as described above.
If the module hasn't already been defined and AMD does not find the define function in its cache, then AMD will make an XHR request back to the server to try to locate the specific module code. The result of the XHR call should provide the define function. AMD will call the function and use the result as the object to pass into the callback. Again, subsequent requires for dijit/form/TextBox will also use this object.
The Dojo build provides the ability to 1) minify the code and 2) combine it into fewer files that need to be requested from the server.
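For reference, the same layering can be described in a command-line build profile; a sketch (the layer names and module lists are illustrative, and build.dojotoolkit.org generates the equivalent for you):
// build.profile.js - one minified file per layer; each layer embeds the
// define() functions of the modules listed in "include"
var profile = {
    basePath: "./src",
    releaseDir: "../release",
    layers: {
        "dojo/dojo":   { include: ["dojo/dojo"], boot: true },
        "dijit/dijit": { include: ["dijit/form/TextBox"] },
        "dojox/dojox": { include: ["dojox/main"] }
    }
};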
AMD allows you to write code that can run in either environment (using built files or the individual files) without having to make modifications.