How to know if an adapter is running on MobileFirst Development Server - ibm-mobilefirst

Is there some way to know if an adapter is running in the MobileFirst Development Server or if it has been deployed to a full server?
--Update--
Specifically, I want to find out, from the adapter's code itself, if the adapter is being executed in a developer's machine or if it is being executed in WAS/Tomcat/non-development Liberty Profile.
I want to know this in order to leave some adapter procedures intended for testing unprotected; these testing procedures would look similar to this:
function testThisAdapter() {
    if (isDevelopmentServer()) {
        return doMyTestStuff();
    } else {
        return {isSuccess: false, errors: ['nice try']};
    }
}
--Update--
This is what I am using, based on Idan's answer:
function isDevelopmentServer() {
    var clientRequest = WL.Server.getClientRequest();
    var url = clientRequest.getRequestURI();
    var pattern = /\/dev\/invoke/;
    return pattern.test(url);
}

Update: When using the 6.3 (or earlier) Studio MobileFirst Development Server, all adapter requests go through a development servlet. The request URL will contain /dev/ in it: http://serverIp:10080/my-project-name/dev/invoke?adapter=my-adapter-name&procedure=my-procedure-name. See here: Endpoints of the MobileFirst Server production server
That's the only differentiator that I know of. I am not sure you can use it in your adapter code. Maybe in the client: if you somehow manage to retrieve this URL or validate its existence, you could devise appropriate logic for the app.
See the following user documentation topic: Vitality queries for checking server health
Use IBM® Worklight® vitality queries to run a health check of your server, and determine the vitality status of your server. You generally use the IBM Worklight vitality queries from a load balancer or from a monitoring app (for example, Patrol).
You can run vitality queries for the server as a whole, for a specific adapter, for a specific app, or for a combination of these.
For an adapter, the query would be: http://<server>:<port>/<publicWorkLightContext>/ws/rest/vitality?app=MyApp&adapter=MyAdapter
The user documentation topic contains more information and examples.
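For illustration, here is a rough sketch of how an HTTP adapter procedure could probe that endpoint itself. This is not from the documentation; it assumes the adapter's connectionPolicy in its XML points at the Worklight server's host and port, and the context root and parameter values are placeholders you would adjust:

function checkVitality(appName, adapterName) {
    // Hit the vitality endpoint described above; the path depends on your public context root.
    var response = WL.Server.invokeHttp({
        method: 'get',
        returnedContentType: 'plain',
        path: '/worklight/ws/rest/vitality', // placeholder for <publicWorkLightContext>/ws/rest/vitality
        parameters: {
            app: appName,
            adapter: adapterName
        }
    });
    // response.isSuccessful only tells you the HTTP call went through;
    // inspect the returned payload to decide whether the server/adapter is actually healthy.
    return response;
}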

Related

Dart integration test with VM server and dartium browser

I'm making a library that implements both server and client parts that interact with each other via websockets:
Server use example (run in the CLI):
Server srv = await new Server("localhost:1234");
srv.onNewClientConnected.listen((client) => print("client connected"));
Client use example (run in the browser):
Client cli = await new Client("localhost:1234");
cli.sendCommand(...);
(Just by creating the instances, the client should be connected and the server notified about that connection.)
I'd like to know: what would be the best way to test their interactions? Could I check both objects' internals with that method?
I would like something like this:
test(".echo should receive same input from server", (){
cli.echo("message");
expect(srv.lastMessageReceived, equals("echo: message"));
expect(cli.lastResponseReceived, equals("echo: message"));
expect(srv.amountMessagesReceived, equals(1));
});
If I understand correctly, I'm guessing you are trying to encapsulate https://www.dartlang.org/dart-vm/dart-by-example#websockets into helpers so that you only have instances once connected. However, both operations (server-side binding/listening/upgrade, client-side connection) are asynchronous, so you will never reach the state you want by just creating the instances (or you would need additional asynchronous methods to be notified). I would suggest creating asynchronous helpers.
Assuming you accept only one client in your server, the server side would be:
Server server = await Server.accept("localhost:1234");
Client side:
Client client = await Client.connect("localhost:1234");
By doing so, you will only have server and client instances once they are connected.
I like the https://pub.dartlang.org/packages/web_socket_channel package, which provides a good abstraction and allows me to test the web socket client logic that will run in the browser in a simple io test.
As for testing recommendations, I personally start my web socket server in setUpAll and create my client in setUp, and use logic similar to what you propose (don't forget the await, though, as you will need to wait for the echo response). Again, the web_socket_channel package has some good testing examples that you can look at (https://github.com/dart-lang/web_socket_channel/tree/master/test).

Simulate Access Disabled feature in Worklight, when the Worklight server itself is down

I am trying to show end users a maintenance message such as "we are down, please try later" and disable the application. But my problem is: what if my Worklight server itself is down and not reachable, so I cannot use the feature provided by the Worklight Console?
Is there a way to make my app talk to a different server which returns the JSON data below when an app is disabled? Can I simulate this behaviour? Is this possible?
JSON received on access disabled in Worklight:
/*-secure-
{"WL-Authentication-Failure":{"wl_remoteDisableRealm":{"message":"We are down, Please try again soon","downloadLink":null,"messageType":"BLOCK"}}}*/
I have some conceptual problems with this question.
Typically a production environment (simplified) would not consist of a single server serving your end-users... meaning, there would be a cluster of nodes, each node being a Worklight Server, and this cluster would be behind a load balancer that would direct the incoming requests. And so in a situation where a node is down for maintenance like in your scenario there would still be more servers able to serve - there would be no down time.
And thus your suggestion to simulate a Remote Disable by sending it from another(?) Worklight Server does not seem like the correct path to take (it may even be simply wrong). If you had this second Worklight Server, why wouldn't it just serve the apps as usual? See again my first paragraph about clustering.
Now let's assume there is still a downtime that affects all servers. The application's client logic should be able to handle failed connections to the Worklight Server. In such a case you should handle this in the WL.Client.connect()'s onFailure callback function to display a WL.SimpleDialog that looks just like a Remote Disable's dialog... or perhaps via initOptions.js's onConnectionFailure callback.
Bottom line: you cannot simulate the JSON that is sent back for the wl_RemoteDisable realm; it is part of a larger security mechanism.
Additionally, perhaps a better way to handle maintenance mode on your server is to have the HTTP server return a specific HTTP status code, check for this code in the client, and display a proper message based on the returned HTTP status code.
To check for this code in a simple example:
Note: the getStatus method is available starting MobileFirst Platform Foundation 7.0 (formerly "Worklight").
function wlCommonInit() {
    WL.Client.connect({onSuccess: success, onFailure: failure});
}

function success(response) {
    // ...
}

function failure(response) {
    if (response.getStatus() == "503") {
        // site is down for maintenance - display a proper message.
    } else if ...
}
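Tying that back to the earlier suggestion of mimicking the Remote Disable dialog, the failure callback could then look something like the sketch below. The title, message, and button text are placeholders; WL.SimpleDialog.show and WL.Client.connect are the standard client APIs, and the 503 check assumes you configured your HTTP server to return that code during maintenance:

function failure(response) {
    if (response.getStatus() == "503") {
        // Site is down for maintenance - show a dialog similar to the Remote Disable one.
        WL.SimpleDialog.show(
            "Maintenance",                           // placeholder title
            "We are down, please try again soon.",   // placeholder message
            [{text: "Try again", handler: function () {
                WL.Client.connect({onSuccess: success, onFailure: failure});
            }}]
        );
    } else {
        // Handle other connection failures (no connectivity, timeouts, and so on).
    }
}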

Worklight Polling Adapter - Calling another Adapter

In Worklight 5.0.6, we have created an eventSource using the following:
WL.Server.createEventSource({
    name: 'ReminderSource',
    onUserSubscribe: 'userSubscribeFunc',
    poll: {
        interval: 86400,
        onPoll: 'getReminders'
    }
});
The getReminders procedure then calls other HTTP and SQL adapters to determine if we should send a Push Notification. When we deploy this to our Worklight server, we see the following error any time we try to call one of the procedures in another adapter:
The resource 'proc:tbl_member.getPreferences' should only be accessed
when authenticated in realm 'wl_antiXSRFRealm'.
We've tried using a mobileSecurityTest (which includes the wl_antiXSRFRealm) to protect the eventSource, but we get the same error. Is there a way to have our polling adapter procedure somehow "log in" to the antiXSRFRealm?
We can't make the other adapter procedures unprotected, because they do need to be protected.
antiXSRF is used to detect client-server cross-site request forgery (XSRF) attacks. It doesn't do much for invocations between adapter procedures. Try creating a custom security test and adding only the user realm there, no antiXSRF.

Re-route/Divert some WL.Client Adapter Invocation traffic to WL Server through different URL (for PCI payment and security requirements)?

Worklight 5.0.6.1
We are having a specific requirement from our client about using a PCI Appliance from Intel (http://info.intel.com/rs/intel/images/Intel_Expressway_Tokenization_Broker.pdf) to avoid a PCI Audit for the application and server.
Therefore, the Adapter calls that have something to do with payment data would need to go through this hardware appliance before hitting the worklight server. All other adapter calls should go to the worklight server directly (to not overload the appliance).
The idea is to have two different URLs but the same worklight server in the background. It is assumed that the calls through the appliance will be transparent for the worklight server, so worklight functionality should not be impacted.
My questions around this would be:
Is there a Worklight best practice for having two different URLs for the same Worklight server, and alternating between those URLs from the client for adapter invocations (only; not Direct Update or anything else, since we assume this is executed natively)?
Is it possible to dynamically overwrite the Worklight server URL that is used for an adapter invocation through JavaScript code on the client? E.g. overwrite a specific JS function that gets/returns the Worklight URL from somewhere before the WL.Client AJAX adapter invocation?
We are also looking into having a load balancer switch the route based on a regex of the adapter name being invoked. But it is not yet clear whether that is possible and what the performance impact would be.
Though possible, this is not something supported by WL. You will not be able to get help from support in case something goes wrong (and it will). You have to keep in mind that all server cookies (e.g. the session ID) are per domain. Therefore, when you dynamically change the server URL you will lose them, and the WL server will treat your request as a new session, unrelated to the old (existing) one. This is not something specific to WL; this is how HTTP works.
WL keeps server URLs in two global properties - WL.AppProp.WORKLIGHT_ROOT_URL and WL.AppProp.APP_SERVICES_URL. You can override them, thus changing the server URLs.
The first one is used for all requests triggered by the developer (init, connect, login, etc.). The second one is used for miscellaneous internal functionality (e.g. encrypted cache).
Once again - this is a hack, definitely not a solution. Use with caution, if at all. :)
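For illustration only, a rough sketch of what such an override could look like (again, unsupported internals; the property names are the two mentioned above, while the path suffixes are hypothetical placeholders that depend on your app name and environment):

function overrideWorklightServerUrl(serverUrl) {
    // Unsupported hack: point Worklight's internal URL properties at a different host.
    // The suffixes below are placeholders - take the real values from your app's generated properties.
    WL.AppProp.WORKLIGHT_ROOT_URL = serverUrl + "/apps/services/api/MyApp/common/";
    WL.AppProp.APP_SERVICES_URL = serverUrl + "/apps/services/";
}

Note that, as explained above, switching hosts mid-session will drop the session cookies, so expect the next request to start a new session.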
How about this: we define our own function that updates some of those static properties?
function changeServerUrl(serverURL) {
    WL.StaticAppProps.APP_SERVICES_URL = serverURL + WL.StaticAppProps.POSTFIX_APP_SERVICES_URL;
    WL.StaticAppProps.WORKLIGHT_ROOT_URL = serverURL + WL.StaticAppProps.POSTFIX_WORKLIGHT_ROOT_URL;
    WL.StaticAppProps.WORKLIGHT_BASE_URL = serverURL;
}
and call it:
changeServerUrl("http://" + yourServerIP + ":" + PORT);
If you dig into the worklight.js file, there is a function setWLUrl(url) that can be used to change the server URL.
Call it like this and it's done:
setWLUrl("http://" + yourServerIP + ":" + PORT);
It's kind of a hack, but I think it should not cause any issues since it's a function within their API.
Good Luck

Web Deploy API (deploy .zip package) Clarification

I'm using the web deploy API to deploy a web package (.zip file, created by MSDeploy.exe) to programmatically roll the package out to a server (we need to do some other things before we release the package which is why we're not doing it all in one go using MSDeploy.exe).
Here's the code I have. My question is really to clarify what is happening when this is executed. In the package parameters XML file I have the application name specified ("Default Web Site") but that's about it; no other params are specified in there. From testing, the package appears to get deployed to the server successfully, but my question is: are any other settings on the server I'm deploying to getting changed without my knowledge? Are any default settings published, etc.? Things like security settings, directory browsing, etc. that I might not be aware of? The code here seems to deploy the package, but I'm anxious about using this on a production environment when I'm so unsure of how this API works. The MS documentation is not helpful (more like non-existent, actually).
DeploymentChangeSummary changes;
string packageToDeploy = "C:/MyPackageLocation.zip";
string packageParametersFile = "C:/MyPackageLocation.SetParameters.xml";

DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    UserName = "MyUsername",
    Password = "MyPassword",
    ComputerName = "localhost"
};

using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(DeploymentWellKnownProvider.Package, packageToDeploy))
{
    deploymentObject.SyncParameters.Load(packageParametersFile);

    DeploymentSyncOptions syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    // Deploy the package to the server.
    changes = deploymentObject.SyncTo(destinationOptions, syncOptions);
}
If anyone could clarify that this snippet should deploy a package to a web site application on a server, without changing any existing server settings (unless specified in the SetParameters.xml file) that would be really helpful. Any good resources on using the API or an explanation of how web deployment works behind the scenes would also be much appreciated!
The SetParameters file just controls the values for the parameters defined in the package. A package might be doing much more than that. Web Deploy has a concept of providers, and any given package can have one or more providers.
If you want to make sure that the package is not changing server-side settings, the best approach you can take is to use the API but have the packages deployed via the Web Management Service. This will give you two benefits:
You can control what providers you allow through.
You can add users and give restricted permissions to them to deploy to their site or their folder etc.
The alternate approach is to:
Manually look at the archive.xml in the package and look for the providers it contains. As long as you don't see any providers that can cause server settings to change, such as appHostConfig, webServer, or regKey (this is not a comprehensive list), you should be good. runCommand is a provider that allows you to execute batch scripts or commands. While it is a good provider for admins themselves, you need to consider whether you want to allow packages with such providers to run.
You can do the above-mentioned inspection in code by calling GetChildren on the deployment object you create out of the package and inspecting the providers and the provider paths.