How do I keep Flow annotations intact through Babel/Webpack?

I have a private npm module for utility functions with Flow type annotations.
I develop in Node v7 and, prior to npm publish, I use Babel/Webpack to transpile it for earlier Node versions so it can run in environments like AWS Lambda.
I use the transform-flow-strip-types Babel plugin to compile, but as I understand it, that means I lose static type checking of my exported functions when I import the module into another project.
I tried babel-plugin-syntax-flow, but it throws unexpected-token errors, so I'm assuming this isn't its intended use.
Can I transform my src/ with Babel while keeping the Flow types intact?
The type annotations are simple (string, number, mostly), so I'd like to avoid writing typedefs to export with every function.
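For concreteness, a typical utility of the kind in question might look like this (hypothetical file and function):
// @flow
// src/clamp.js - hypothetical example of the simple annotations described
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}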

I came across an article explaining how to achieve what you are describing:
http://javascriptplayground.com/blog/2017/01/npm-flowjs-javascript/
By publishing both the JavaScript with the Flow types stripped and the original Flow-typed source, you get proper Flow type checking when using your library.
This is achieved by publishing the original file as file.js.flow, and the babelified file as file.js.
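A minimal sketch of that setup, assuming a src/ -> lib/ layout, the Babel 6 CLI, and the flow-copy-source helper (which copies every src/*.js to lib/*.js.flow); the paths here are illustrative assumptions:
.babelrc:
{
  "presets": ["env"],
  "plugins": ["transform-flow-strip-types"]
}
package.json (relevant parts):
{
  "main": "lib/index.js",
  "scripts": {
    "build": "babel src/ -d lib/",
    "build:flow": "flow-copy-source src lib",
    "prepublish": "npm run build && npm run build:flow"
  }
}
Consumers then execute lib/index.js at runtime, while Flow picks up the adjacent lib/index.js.flow files for type checking.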

Related

ADTF SDK: import manifest AND handle it

I'm trying to run a full ADTF configuration from my own C++ command-line application using the ADTF SDK. ADTF version: 2.9.1 (pretty old).
Here's what I have (want) to do:
Load manifest file
Load globals-xml
Load config-xml
Steps 2 & 3 are done using the session-manager service - see the ISessionManager interface (https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_session_manager.html), functions LoadGlobalsFromFile & LoadConfigFromFile.
The problem is that I don't know how to do point 1: currently, instead of loading a manifest, I manually load the list of services myself using _runtime->RegisterPlugin, _runtime->CreateInstance and _runtime->RegisterObject.
What I've managed to do is to load only the namespace service and use the INamespace interface which has a method for loading manifest files: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_namespace.html - see ImportFile with ui32ImportFlags = CF_IMPORT_MANIFEST.
But this only loads the manifest settings into the namespace; it doesn't actually instantiate the services. I could do it manually:
Do _runtime->RegisterPlugin for every url under root/plugins/ in the namespace
Do _runtime->CreateInstance for every objectid under root/services/ in the namespace
But I want this to be more robust, and I'm hoping there's already a service that subsequently processes the populated namespace and performs these actions. Is there such a service?
Note: if you know how this could be done in ADTF3, that might also be of help to me, so don't hesitate to answer/comment.
UPDATE
See "Flow of the system" on this page: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_service_layer.html
Apparently the runtime instance itself handles the manifest file (see run-levels shutdown & kernel), but I don't know how I'm supposed to tell it where the file is.
I've tried setting the command-line arguments when instantiating cRuntime to count = 2, with the 2nd argument = the manifest file path. It doesn't work :).
In ADTF3 you can just use the supplied cADTFSystem class to initiate an ADTF system and then use the ISessionManager interface to load a session of your choice.
Found the answer, not exactly what I expected though. I tried debugging adtf_runtime.exe to find out what arguments it passes to cRuntime.
The result is indeed similar to what I suspected (and actually tried):
arg1 = adtf_runtime.exe (argv[0] in adtf_runtime)
arg2 = full path to manifest file (e.g. $(ADTF_DIR)\bin\adtf_devenv.manifest)
arg3 = basename of manifest file, without extension (e.g. "adtf_devenv")
While this suggested that cRuntime is indeed responsible for loading and handling the manifest, it turned out NOT to be quite so: passing the same arguments to it did not do the job. The answer came when I noticed that adtf_runtime.exe was actually using an extension of cRuntime called cRuntimeEx, which is NOT part of the SDK (at least I haven't found it).
This class IS among the exported symbols of the ADTF SDK library, i.e. a "dumpbin /symbols adtfsdk_290.lib" renders at some point:
public: __cdecl adtf::cRuntimeEx::cRuntimeEx(int, char const ** const, class ucom::IException **)
but it is NOT part of the SDK (you won't find a header file defining it).
Among its methods you'll also find this:
protected: long __cdecl adtf::cRuntimeEx::LoadManifest(class adtf_util::cString const &,class std::set,class std::allocator > *,class ucom::IException * *)
Voila. And thus, unfortunately, I cannot achieve what I wanted in a robust fashion. :)
I ended up manually implementing the manifest-loading logic, since cRuntimeEx is not made available within the SDK. Something along these lines (a skeleton of the manifest structure these steps traverse is sketched after the list):
Use a cDOM instance to load the manifest file
Call FindNodes("/adtf:manifest/environment/variable") to find the environment variables that need to be set, and set them using cSystem::SetEnvVariable
Call FindNodes("/adtf:manifest/dependencies/platform") to find library dependencies, and use cDynamicLinkage::Load to load the libraries that target the current platform (win32/linux)
Call FindNodes("/adtf:manifest/plugins/plugin") to find the services to be loaded using _runtime->RegisterPlugin (you may also handle the "optional" attribute)
Call FindNodes("/adtf:manifest/services/service") to find the services that need to be created using _runtime->CreateInstance and _runtime->RegisterObject (again, the "optional" attribute may be handled)
And, finally, call FindNodes("/adtf:manifest/manifests/manifest") to (recursively) load child manifests (here too, the "optional" attribute may be handled)
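Based on those XPaths, the manifest presumably has a shape like the following (the element nesting is inferred from the paths above; attribute names other than "optional" are unknown, so the "..." placeholders are deliberate):
<adtf:manifest>
    <environment>
        <variable ... />                <!-- env vars set via cSystem::SetEnvVariable -->
    </environment>
    <dependencies>
        <platform ... />                <!-- per-platform libraries for cDynamicLinkage::Load -->
    </dependencies>
    <plugins>
        <plugin ... optional="..."/>    <!-- urls passed to _runtime->RegisterPlugin -->
    </plugins>
    <services>
        <service ... optional="..."/>   <!-- objectids for CreateInstance/RegisterObject -->
    </services>
    <manifests>
        <manifest ... optional="..."/>  <!-- child manifests, loaded recursively -->
    </manifests>
</adtf:manifest>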
The only thing you need to do is start the ADTF launcher with the meta files (manifest). This works for ADTF 2 as well as for ADTF 3, and it can be done from a (console) application. If you also want to do a little bit more in ADTF 3, you can use adtf control instead of the adtf launcher, with its scripting interface (see the scripts under examples).

atlassian-plugin.xml contains a definition of component-import. This is not allowed when Atlassian-Plugin-Key is set

This is what I get when I run atlas-create-jira-plugin followed by atlas-create-jira-plugin-module, selecting option 1: Component Import.
The problem is that all the tutorial examples appear to have plugin descriptors generated by an old SDK version (ones that won't deploy with newer versions of the SDK/Jira at all), which do not feature Atlassian-Plugin-Key, so I can't find my way to importing a component.
I'm using SDK 6.2.3 and Jira 7.1.1.
Any hint on how to get this sorted out?
anonymous is correct. The old way of doing things was to put the <component-import> tag in your atlassian-plugin.xml. The new (and recommended) way is to use Atlassian Spring Scanner. When you create your add-on using atlas-create-jira-plugin and your pom.xml has the <Atlassian-Plugin-Key> tag plus the dependencies atlassian-spring-scanner-annotation and atlassian-spring-scanner-runtime, then you are using the new way.
If you have both dependencies, you are using Atlassian Spring Scanner version 1.x. If you only have atlassian-spring-scanner-annotation, then you are using version 2.x.
You don't have to omit/comment out Atlassian-Plugin-Key in your pom.xml, and you don't have to put component-import in your atlassian-plugin.xml.
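For reference, in an SDK-generated pom.xml the key is set via the AMPS plugin's <instructions> block; this snippet follows the standard archetype output, but treat it as a sketch:
<!-- inside the maven-jira-plugin <configuration> in pom.xml -->
<instructions>
    <Atlassian-Plugin-Key>${atlassian.plugin.key}</Atlassian-Plugin-Key>
    <!-- other OSGi instructions ... -->
</instructions>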
For example, say you want to add licensing for your add-on and need to import the component PluginLicenseManager. You just go straight to the code, and your constructor might look like this:
@Autowired
public MyMacro(@ComponentImport PluginLicenseManager licenseManager) {
    this.licenseManager = licenseManager;
}
And your class like this:
@Scanned
public class MyMacro implements Macro {
If memory serves me right, be sure to check for null, because sometimes Atlassian Spring Scanner can't inject a component. I think on version 1, when writing an @EventListener, it could not inject a ConversionContext, but when writing a Macro, it was able to inject one.
According to
https://bitbucket.org/atlassian/atlassian-spring-scanner
component-import is not needed. You can replace it with the @ComponentImport annotation in your Java.
Found the answer here: https://developer.atlassian.com/docs/advanced-topics/configuration-of-instructions-in-atlassian-plugins
It looks like I've somehow been missing that Atlassian-Plugin-Key can be omitted, and that omitting it is exactly what must be done when you need to import components.
This key just tells Spring not to 'transform' the plugin's Spring configuration, which must happen as part of the component-import process.

Mule APIKit and multiple RAMLs

Is it possible to use multiple RAML files in one APIKit Mule project?
Let's say I have two functions /api/func1 and /api/func2.
Each of the functions is defined in its own raml - func1.raml and func2.raml.
I've generated a flow in Anypoint for the first function using the APIKit wizard. It's working ok.
Now I'm trying to generate a flow for the second function. The flow is generated with no errors; however, it just doesn't work. I've tried fixing the URLs, bindings, and configurations, and nothing really helps.
Note that I don't want to bind both RAMLs into one file. The reason is that it's easier to develop/maintain the functions separately.
The only solution I can see is to define two separate projects, but this is not really what I'd like to do.
So, I'm looking for advice on how to deal with this situation.
Thanks,
Ok, actually, it's possible.
What you need to do is make the "Path"s different in the HTTP connectors of the generated flows.
The APIKit wizard generates the default path, which looks like this: "/api/*". With both flows bound to the same path, Mule generates an error when attempting to deploy the app. What you need to do is change the paths to "/api/func1/*" and "/api/func2/*", for example:
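A sketch of what that looks like in Mule 3 XML (the connector/config names and APIKit config refs are assumptions, not generated names):
<http:listener-config name="httpConfig" host="0.0.0.0" port="8081"/>
<flow name="func1-main">
    <!-- each generated flow listens on its own base path -->
    <http:listener config-ref="httpConfig" path="/api/func1/*"/>
    <apikit:router config-ref="func1-config"/>
</flow>
<flow name="func2-main">
    <http:listener config-ref="httpConfig" path="/api/func2/*"/>
    <apikit:router config-ref="func2-config"/>
</flow>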
You can continue having a single RAML file and use external references (!include) to simplify your RAML; here is an example:
#%RAML 0.8
title: Eventlog API
version: 1.0
baseUri: http://eventlog.example.org/{version}
schemas:
  - eventJson: !include eventSchema.json
    eventListJson: !include eventlistSchema.json
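The included schemas can then be referenced by name from a resource; the /events resource below is a hypothetical illustration, not part of the original example:
/events:
  get:
    responses:
      200:
        body:
          application/json:
            schema: eventListJson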
Also, going by strict REST design, it is recommended to keep the details related to a resource in a single RAML file.
Optionally, you may edit the URLs to resolve any context-related conflicts.

Adding custom configuration in config.yml in Symfony 2.1

I want to define custom configuration parameters in config.yml.
Example - in the config.yml file:
security_enhancement:
    authentication:true
    authorization:true
In the same format as the swiftmailer configuration, etc. I can't figure out how to define this.
I'm getting an error like:
1/2 ParseException: Unable to parse in "\/var\/www\/demo\/app\/config\/config.yml" at line 217 (near "authentication:true").
Am I missing something here? Is it necessary to add this in a dependency-injection extension file? Actually, I want to enable/disable authentication and authorization execution during dev mode, which is implemented in a listener and so could be done using config_dev.yml. I don't want to add it under parameters. Any suggestions?
As you've rightly theorised, you do indeed need to add it in DI extension files, assuming your configuration relates to particular bundles (which it almost certainly will). Note that the immediate ParseException is a plain YAML syntax problem: a mapping needs a space after the colon, i.e. authentication: true rather than authentication:true; fix that before anything else.
Whilst parameters can simply be defined at will, configuration features hierarchical structure and validation.
Usually, configuration is used, in turn, to define parameters, but it allows the values to be parsed and validated prior to their instantiation, so that bundle writers can provide better guidance as to how their services can be used (with meaningful errors) and can trust the values that are being passed into them.
A decent read on how to get started with config component can be found in the Symfony2 docs: defining and processing configuration files with the config component.
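For reference, once an extension defines the security_enhancement tree, the YAML itself must also be syntactically valid - note the space after each colon:
security_enhancement:
    authentication: true
    authorization: true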

Code looking for modules in the wrong place

I have created a multi-layer build using build.dojotoolkit.org (my first attempt) with 3 layers: dojo.js, dojox.js, dijit.js. Each js file is uploaded in its own folder (dojo, dojox, dijit).
When I run the code, I would expect it to look in dijit.js to get the form modules like dijit.form.TextBox. But instead it tries to load dijit/form/TextBox.js and of course ends up with a 404 error.
What am I doing wrong?
The files are here if it helps:
http://usermanagedsolutions.com/Demos/Pages
Manually include each layer in a script tag on the page.
<script src="path/to/dojo.js"></script>
<script src="path/to/dojox.js"></script>
<script src="path/to/dijit.js"></script>
This will make all modules that you have defined in the build available. When you require the text box, Dojo will see that it already has the code and will not make the XHR call.
Even though you do not have the intention of using the individual files, you may want to put them on the server as well. This way, if someone forgets to add a file to the build, the penalty incurred is an XHR request, as opposed to a JavaScript error.
Re: AMD
When you include your layers in the manner that I described above, you are not loading all the modules that you included in the build - you are just making the define functions available without having to make XHR requests.
If you look at the js file that is output from the build, the file contains a map of the module path to a function that when called will define the module.
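Roughly, a built layer is a cache of module ids mapped to their factory functions - a simplified sketch (not the exact build output) looks like this:
require({cache:{
  "dijit/form/TextBox": function(){
    define(["dojo/_base/declare" /* , ... */], function(declare){
      /* module body as written in the source file */
    });
  }
  /* ...one entry per module in the layer */
}});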
So when you write the following code
require(["dijit/form/TextBox"], function(TextBox){
...
});
AMD will first determine whether dijit/form/TextBox has already been defined. If so, it will just take the object and execute the callback.
If the module hasn't already been defined, then AMD will look in its cache to see if the define code is available. When you include your script files, you are providing a cache of define functions. AMD finds the code to define the module; it calls this define function, and the result is the object that is passed into the callback. Subsequent requires for dijit/form/TextBox will also use this object, as described above.
If the module hasn't already been defined and AMD does not find the define function in its cache, then AMD will make an XHR request back to the server to try to locate the specific module code. The result of the XHR call should provide the define function. AMD will call the function and use the result as the object to pass into the callback. Again, subsequent requires for dijit/form/TextBox will use this same object.
The Dojo build provides the ability to 1) minify the code and 2) combine it into fewer files that need to be requested from the server.
AMD allows you to write code that can run in either environment (using built files or the individual files) without having to make modifications.