AngularAMD: the app depends on services but services depend on the app

I am clearly missing something very basic.
The instructions are to create an app, like this:
define(['angularAMD'], function (angularAMD) {
    var app = angular.module(app_name, ['webapp']);
    ... // Setup app here. E.g.: run .config with $routeProvider
    return angularAMD.bootstrap(app);
});
And then create subsequent items like this:
define(['app'], function (app) {
    app.factory('Pictures', function (...) {
        ...
    });
});
And there is this helpful line:
Any subsequent module definitions would simply need to require app to create the desired AngularJS services
Well that's just great for subsequent module definitions, but app.config and app.run need lots of prerequisite modules that I am supposed to create -- as would any application beyond the level of a toy. So there is obviously some simple solution that I am missing. How do I create services that the app depends on?

You can simply use angularAMD directly to create services.
For example,
define(['angularAMD'], function (angularAMD) {
    angularAMD.service('LoggerService', ['$log', function ($log) {
        return function (msg) {
            $log.log('message:', msg);
        };
    }]);
});
The services created using this method are available before the application is bootstrapped, so the app can depend on these services.
More code along these lines can be found in the angular-AMD sample app.
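For example, the main application module can then list such a service module as a dependency before bootstrapping. A minimal sketch, assuming the module id 'services/LoggerService' and the app name 'webapp' (both are illustrative, not from the original answer):

define(['angularAMD', 'services/LoggerService'], function (angularAMD) {
    var app = angular.module('webapp', []);
    app.run(['LoggerService', function (LoggerService) {
        // the service registered before bootstrap is injectable here
        LoggerService('application started');
    }]);
    return angularAMD.bootstrap(app);
});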

Can I determine `IsDevelopment` from `IWebJobsBuilder`

Very much an XY problem, but I'm interested in the underlying answer too.
See bottom for XY context.
I'm in a .NET Core 3 Azure Functions (v3) app project.
This code makes my question fairly clear, I think:
namespace MyProj.Functions
{
    internal class CustomStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            var isDevelopment = true; // Can I correctly populate this, such that it's true only for local Dev?
            if (isDevelopment)
            {
                // Do stuff I wouldn't want to do in Prod, or on CI...
            }
        }
    }
}
XY Context:
I have set up Swagger/Swashbuckle for my Function, and ideally I want it to auto-open the swagger page when I start the Function, locally.
On an API project this is trivial to do in Project Properties, but a Functions csproj doesn't have the option to start a web page "onDebug"; that whole page of project Properties is greyed out.
The above is the context in which I'm calling builder.AddSwashBuckle(Assembly.GetExecutingAssembly()); and I've added a call to Diagnostics.Process to start a webpage during Startup. This works just fine for me.
I've currently got that behind a [Conditional("DEBUG")] flag, but I'd like it to be more constrained if possible. Definitely open to other solutions, but I haven't been able to find any so ...
While I am not completely sure that it is possible in Azure Functions, I think that setting the ASPNETCORE_ENVIRONMENT application setting, as described in https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings, should allow you to determine whether the environment is Production or Development by injecting an IHostEnvironment dependency and checking .IsDevelopment() on the injected dependency.

How to use hapi-swaggered without a running server

I have a working hapi service, complete with hapi-swaggered and hapi-swaggered-ui. This is useful for many cases, but I want to add a build step to my CI which will be able to get the JSON generated by hapi-swaggered (which, if changed, would get compiled into a .NET assembly that gets stored in a local ProGet).
I know that if I really wanted to, on my build server, I could start an instance of my server, curl to localhost:3000/swagger, kill the server, and proceed, but that seems a little risky (i.e., what if I have two builds running at the same time?).
Has anyone developed a way to directly call the hapi-swaggered API to get the raw JSON?
Well, that didn't take too much longer, but I think I found one solution. In this case, internals is my server. It does not auto-start if it's loaded (require'd) from another file, and the compose method is exposed to use hapi's Glue.compose to assemble the service. It seems that I can then use the inject method to simulate a call.
'use strict';
var internals = require('./');

internals.compose(function (err, server) {
    server.inject({ method: 'GET', url: '/swagger' }, function (response) {
        console.log(JSON.stringify(response.result));
        process.exit();
    });
});
If there's anything that I'm missing about this technique, I'd like to hear about it.
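In a CI step it may be more convenient to write the output straight to disk rather than capture stdout. A minimal sketch of that variant (the swagger.json file name is an assumption, and the error check is added for safety):

'use strict';
var fs = require('fs');
var internals = require('./');

internals.compose(function (err, server) {
    if (err) {
        throw err;
    }
    server.inject({ method: 'GET', url: '/swagger' }, function (response) {
        // persist the generated swagger JSON for the next build step
        fs.writeFileSync('swagger.json', JSON.stringify(response.result, null, 2));
        process.exit();
    });
});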

How to use Stub object with tinytest and meteorjs?

This weekend I tried to test a package "A" from my Meteor app.
This package depends on another package "B" that defines all collections, so package "B" exposes all the required collections.
Package "A" exposes a main object that has some methods that use the collections exposed by "B".
I want to replace some collections with code like this:
myCol = {
    findOne: function () {
        return { _id: 1, name: 'ben' };
    }
};
But it fails. This code is OK from the tinytest.add code, but the methods of package "A" still use the original collection variables. I've seen in the build folder that everything is re-written by the build system, so I wonder what the best way is to test my code without depending on those collection variables.
I have some ideas, like storing those variables in a main object that has get/set methods. It might allow me to swap everything out when I run tests.
Thanks for help
Here is the sample app: https://github.com/MeteorLyon/tutorial-package-dependancy-testing
Follow the README.md to run the different tests.
If you find a solution, that's great.
If you are looking for stubs, I'd highly recommend using sinon. Specifically, have a look at the stubs and the sandbox portions of the docs. You can find atmosphere packages here. Here's a quick example:
Tinytest.add('my test', sinon.test(function (test) {
    // this is a sandboxed stub - we are writing to a global object,
    // but it will be restored at the end of the test
    // (sinon.test injects the sandbox into `this` by default)
    this.stub(Meteor, 'userId', function () {
        return USER_ID;
    });

    // let's do the same thing with a collection
    this.stub(Posts, 'findOne', function () {
        return { _id: 1, name: 'ben' };
    });

    var post = Posts.findOne();
    test.equal(post.name, 'ben');
}));
Keep in mind that tinytest is an integration test framework, so you may get better tests by fully utilizing both packages' APIs. With respect to testing collection interactions, we've found it's better not to stub very much and just insert and clean up as needed. But that's pretty general advice - there may be some specific reason why this can't work in your particular use case.
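For example, an insert-and-clean-up style test against a real collection might look like this (a sketch; Posts is assumed to be a collection exported by package "B"):

Tinytest.add('Posts - findOne returns an inserted post', function (test) {
    // insert real data instead of stubbing the collection
    var id = Posts.insert({ name: 'ben' });
    var post = Posts.findOne(id);
    test.equal(post.name, 'ben');
    // clean up so later tests start from a known state
    Posts.remove(id);
});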

intern test not using configured registry requestProvider in dojo

I'm following this article to mock responses in dojo.
My mocker is very similar to the one in the article except this:
registry.register(/\/testIntern/, function (url, options) {
    return when({
        value: "Hello World"
    });
});
In my understanding, this should map to any request that contains "/testIntern" on the address.
My testcase is quite simple:
// similar to example
var testRest = new Rest("/testIntern", true);
testRest("").then(lang.hitch(this, function (data) {
    assert.deepEqual("Hello World", data.value, "Expected 'Hello World', but got " + data.value);
}));
It really should be quite simple, but when I run this test, I get 404 Not Found. It looks like the REST call in the test doesn't try to use the mocking service. Why?
You are generally correct in your thought that registering a URL with dojo/request/registry should pass anything referencing that URL via dojo/request through your handler.
Unfortunately, dojo/store/JsonRest uses the dojo/_base/xhr module, which uses dojo/request/xhr directly, not dojo/request. Any registrations created with dojo/request/registry (and any setting of defaultProvider) will therefore be lost on JsonRest.
You might want to have a look at dstore - its Rest store implements the same server requests as dojo/store/JsonRest but it uses dojo/request instead of being hard-coded to a specific provider. (dojo/request defaults to dojo/request/xhr in browsers anyway, but can be overridden via dojoConfig.requestProvider.) dstore contains adapters for translating between dstore's API and the dojo/store API, if you need to use it with widgets that operate with the latter.
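To illustrate the underlying mechanism (a sketch, not dstore-specific): once dojoConfig.requestProvider points at the registry, anything going through dojo/request (which is what dstore's Rest store uses) will hit the registered handler:

// in the page, before the loader runs (an assumption for this sketch):
// var dojoConfig = { requestProvider: 'dojo/request/registry' };
require(['dojo/request', 'dojo/request/registry', 'dojo/when'],
    function (request, registry, when) {
        // register a mock provider for any URL containing /testIntern
        registry.register(/\/testIntern/, function (url, options) {
            return when({ value: 'Hello World' });
        });
        // dojo/request now delegates to the registry, so this hits the mock
        request('/testIntern').then(function (data) {
            console.log(data.value); // 'Hello World'
        });
    });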

Sharing code between Worklight Adapters

In most cases I've dealt with so far the Worklight Adapter implementation has been pretty trivial, just a few lines of JavaScript.
On the current project, using WL 5.0.6, we have several adapters, each with several procedures. Our particular backends require some common logic to set up requests and interpret responses. This seems ideal for refactoring common code into a shared library, except that, as far as I can see, there's no "library" concept in the adapter environment unless we want to drop down into Java.
Are there any patterns for code-reuse between adapters?
I think you are right. There is currently no way of importing custom JavaScript libraries.
There is a way to include/load JavaScript files in the Mozilla Rhino engine by using the load('xyz.js') function, but I've noticed that this will make your Worklight adapter undeployable. If you deploy a second *.js file within an adapter, you'll get the following error message:
Adapter deployment failed: Procedure 'getStories' is not implemented in the adapter's JavaScript file.
It seems like Worklight Server can only handle one JavaScript file per adapter.
I have shared some common functionality between adapters by implementing the functionality in Java code and including the jar file in the Worklight war file. This came in handy for invoking stored procs via JDBC that can handle multiple out parms, and also for retrieving PDF content from internal backend services. The jar needs to be in the lib dir of the worklight.war web app that the adapter will be deployed to.
Example of creating a java object in the adapter:
var parm = new org.apache.http.message.BasicNameValuePair("QBS_Reference_ID",refId);
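Along the same lines, any class from the shared jar can be instantiated in adapter JavaScript, since the adapter runs under Rhino. A sketch; com.example.AdapterUtils and its callProc method are hypothetical names, not a Worklight API:

function callSharedHelper(refId) {
    // Rhino resolves Java classes on the adapter's classpath directly
    var helper = new com.example.AdapterUtils(); // hypothetical shared class
    var result = helper.callProc(refId);         // hypothetical method
    return {
        isSuccessful: true,
        result: String(result)
    };
}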
One way to share JavaScript between adapters is to follow a pattern somewhat like this:
CommonAdapter-impl.js:
var commonObject = {
    invokeBackend: function (input, options) {
        // Do stuff to check/modify input & options
        var response = WL.Server.invokeHttp(input);
        // Do stuff to check/modify response
        return response;
    }
};

function getCommonObject() {
    return commonObject;
}
NormalAdapter-impl.js:
function getSomeData(dataNumber) {
    var input = {
        method: 'get',
        returnedContentType: 'json',
        path: '/restservices/getsomedata'
    };
    return _getCommonObject().invokeBackend(input);
}

function _getCommonObject() {
    var invocationData = {
        adapter: 'CommonAdapter',
        procedure: 'getCommonObject',
        parameters: []
    };
    return WL.Server.invokeProcedure(invocationData);
}
In this particular case, the common adapter is being used to provide a "wrapper" function around WL.Server.invokeHttp, but it can be used to provide any other utility functions as well.
The reason for this pattern in particular is that it allows the WL.Server.invokeHttp to run in the context of the "calling" adapter, which means the endpoint (hostname, port number, etc.) specified in the calling adapter's .xml file will be used. It does mean that the short _getCommonObject() function has to be repeated in each "normal" adapter, but this is a fairly small piece of boilerplate.
Note that this pattern has been used with Worklight 6.1 - there is no guarantee it will work in future or past versions.
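For completeness, client-side code calls the normal adapter exactly as it would without this pattern. A sketch using the standard WL.Client API; the parameter value is arbitrary:

WL.Client.invokeProcedure({
    adapter: 'NormalAdapter',
    procedure: 'getSomeData',
    parameters: [1]
}, {
    onSuccess: function (result) {
        // result.invocationResult holds the backend response
    },
    onFailure: function (error) {
        // handle adapter or backend failure
    }
});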