Re-configuring a Worklight application with analytics

After redeploying a Worklight application, some of the analytics configuration got lost, and I'm trying to configure Worklight with analytics again.
The dashboard shows "No data available" for the time after the deployment, although old records from before the deployment are still displayed, so the database was not affected.
I set the wl.analytics.logs.forward property to "true" in worklight.properties; I also set the wl.analytics.url for the analytics data store to something like:
https://myserver:port/analytics/data
The dashboard is on
https://myserver:port/analytics/console
That is the URL for the analytics server. However, if I open the data URL in a browser, I get something like:
Error 404: java.io.FileNotFoundException: SRVE0190E: File not found: /data
I checked SystemOut.log and SystemErr.log (WAS logs) and did not see any errors there.
Does anybody know which XML file I need to check in order to validate that the analytics configuration is OK? How can I troubleshoot this problem? Are there other logs I could check?

In the list of environment variables you gave I do not see any for username and password. Try to set:
wl.analytics.password=admin
wl.analytics.username=admin
It would be useful to see a Wireshark trace to check whether you are getting 403s. The analytics data uploader generally has a small amount of protection, and you have the option to keep or remove it.
@patbarron is correct about the multiple WAR files, though. You need to send your analytics data to the /analytics-service context. The analytics-service WAR handles all the data processing, querying, etc.; the analytics WAR just serves the console UI.
When testing, it might be beneficial to lower the wl.analytics.queue and wl.analytics.queue.size values. Those properties control data collection on the MobileFirst runtime server: data is buffered at the runtime server and then sent to the analytics server, and generally the larger these values are, the longer it takes for data to be sent. The larger values are good for production.
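Putting the pieces above together, a minimal worklight.properties sketch might look like this. The hostname, port, and credentials are placeholders, and the exact property names are worth verifying against the documentation for your Worklight version:
# forward analytics data from the runtime server
wl.analytics.logs.forward=true
# send data to the analytics-service context, not the console WAR
wl.analytics.url=https://myserver:port/analytics-service/data
# credentials for the analytics data uploader
wl.analytics.username=admin
wl.analytics.password=admin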

Intermittent HTTP error when loading files from ADLS Gen2 in Azure Databricks

I am getting an intermittent HTTP error when I try to load the contents of files in Azure Databricks from ADLS Gen2. The storage account has been mounted using a service principal associated with Databricks and has been given Storage Blob Data Contributor access through RBAC on the data lake storage account. A sample statement to load is
df = spark.read.format("orc").load("dbfs:/mnt/{storageaccount}/{filesystem}/{filename}")
The error message I get is:
Py4JJavaError: An error occurred while calling o214.load. : java.io.IOException: GET https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}?timeout=90
StatusCode=412
StatusDescription=The condition specified using HTTP conditional header(s) is not met.
ErrorCode=ConditionNotMet
ErrorMessage=The condition specified using HTTP conditional header(s) is not met.
RequestId:51fbfff7-d01f-002b-49aa-4c89d5000000
Time:2019-08-06T22:55:14.5585584Z
This error does not occur for all the files in the filesystem; I can load most of them. It happens only with some files, and I'm not sure what the issue is.
This has been resolved now. The underlying issue was due to a change at Microsoft end. This is the RCA I got from Microsoft Support:
There was a storage configuration that was turned on incorrectly during the latest storage tenant upgrade. This type of error would only show up for namespace-enabled accounts on the latest upgraded tenant. The mitigation for this issue is to turn off the configuration on the specific tenant, and we have kicked off the configuration rollout for all the tenants. We have since added additional storage upgrade validation for ADLS Gen2 to help cover this type of scenario.
I had the same problem on one file today. Downloading the file, deleting it from storage and putting it back solved the problem.
Tried to rename file -> didn't work.
Edit: we are seeing it on more files, seemingly at random.
We worked around the problem by copying the entire folder to a new folder and renaming it back to the original name. Jobs run without problems again.
Still the question remains, why did the files end up in this situation?
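For what it's worth, the copy-and-rename workaround can be scripted from a Databricks notebook. A minimal sketch, assuming hypothetical mount and folder names (dbutils is only available inside Databricks):
# hypothetical paths -- substitute your own mount point and folder
src = "dbfs:/mnt/mystorageaccount/myfilesystem/myfolder"
tmp = src + "_copy"

# copy the whole folder; the rewritten files no longer trip the 412
dbutils.fs.cp(src, tmp, recurse=True)

# remove the original and move the copy into its place
dbutils.fs.rm(src, recurse=True)
dbutils.fs.mv(tmp, src, recurse=True)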
Same issue here. After some research, it seems it was probably an If-Match ETag precondition failure on the HTTP GET request. Microsoft describes how error 412 is returned when this happens in this post: https://azure.microsoft.com/de-de/blog/managing-concurrency-in-microsoft-azure-storage-2/
Regardless, Databricks seems to have resolved the issue on their end now.
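For background, a 412 on a GET is standard HTTP conditional-request behavior. A minimal sketch of the mechanism with the requests library, using a hypothetical blob URL and ETag (the ADLS driver does the equivalent internally):
import requests

# hypothetical URL and ETag, for illustration only
url = "https://mystorageaccount.dfs.core.windows.net/myfilesystem/myfile"
stale_etag = '"0x8D71A5E2B3C4D5E"'

# If-Match asks the server to serve the file only if its ETag still matches;
# if the file changed in the meantime, the server answers 412 Precondition Failed
resp = requests.get(url, headers={"If-Match": stale_etag})
print(resp.status_code)  # 412 when the ETag no longer matches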

Debugging Parse Cloud-Code

What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?
During development, you should begin by testing against a locally hosted server. For example, I use VS Code: you can set breakpoints and watch variables for their values. You can also set up a tool like ngrok to get a remote URL for your local endpoint, so you can test with non-locally-hosted clients if you'd like.
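As a sketch, a minimal .vscode/launch.json for debugging a parse-server started from index.js might look like the following (the entry-point file name is an assumption; adjust it to your project). With this in place you can set breakpoints in your Cloud Code files and step through requests locally:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug parse-server",
      "program": "${workspaceFolder}/index.js",
      "skipFiles": ["<node_internals>/**"]
    }
  ]
}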
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev error channel. Instead of console.logs, which are hard to sift through and find what you're looking for, we push important information to Slack. We don't switch every single console.log to a Slack message, just the important "hey, something went wrong; here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome. I recommend using Slack, even on a solo project.
At the moment you can write to your logs using console.log() or console.error() in Cloud Code functions; these, together with the general logs of everything that happens in your app, can be accessed at Back4App via Server Settings -> Logs -> Settings -> Server System Log.
For logs generated by Parse Server itself, use request.log.info() and request.log.error(); at Back4App you can access these via Dashboard -> Logs.

Getting VersionConflictEngineException in IBM MobileFirst 6.3

I'm receiving the following message in the server log in IBM MobileFirst 6.3 every time an adapter is called:
Stacktrace
[ERROR ] Error sending bulk request: java.lang.RuntimeException: failure in bulk execution:
[2]: index [worklight], type [devices], id [b2deefe7-0d15-4ed4-b199-7e42440fc372], message [VersionConflictEngineException[[worklight][1][devices][b2deefe7-0d15-4ed4-b199-7e42440fc372]: version conflict, current [58], provided [57]]]
at com.ibm.elasticsearch.servlet.DataReceiver.processData(DataReceiver.java:132)
at com.ibm.elasticsearch.servlet.DataReceiver.processDataLegacy(DataReceiver.java:85)
at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source) ...
The adapter is executed correctly and the response is returned to the app.
Any idea why this error is happening?
Help will be appreciated.
Thanks.
This is an internal error in analytics. The error itself is actually harmless, but the analytics platform should be catching it. A defect will be logged for the message. In the meantime, if you're not using analytics, you can disable it by removing the analytics WAR files from the Liberty server.
If you are using analytics, then I would recommend clearing out the analytics data folder and restarting the IMF platform (this will remove any data you have stored in analytics). This assumes that you are running in development mode. The analytics data folder can be found in the same directory as the server.xml file of your Liberty server.
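For reference, the analytics WARs usually appear in the Liberty server.xml as application entries along these lines (the names and locations here are assumptions and vary by installation); removing or commenting them out disables analytics:
<!-- analytics console UI -->
<application location="analytics.war" name="analytics" type="war"/>
<!-- analytics data service -->
<application location="analytics-service.war" name="analytics-service" type="war"/>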

IBM MobileFirst Platform 6.3 Operational Analytics Failed installation for Tomcat

I have successfully installed the MobileFirst 6.3 Application Center console and Worklight Console; they are operating fine on Tomcat 7.0.57. However, when I try to install Operational Analytics following this documentation, it fails:
http://www-01.ibm.com/support/knowledgecenter/SSHS8R_6.3.0/com.ibm.worklight.installconfig.doc/monitor/c_op_analytics_installation_tomcat.html
I am using Tomcat Manager (http://localhost:8080/html) to deploy the WAR files, logging in as manager with the manager-gui role.
worklight-analytics.war - deployed with no issues
When I select the worklight-analytics-service.war file and deploy it in the GUI, it first shows a blank page indicating a "connection error", and when I refresh the page, the status bar of the Tomcat Manager GUI shows this message: "FAIL - Tried to use command /upload via a GET request but POST is required".
Please provide some direction on what I need to do to get this fixed. I am not sure if I have provided all the required information; please bear with me and ask if anything else is needed to debug this (obviously I can't yet figure out what is relevant).
So I was able to reproduce your error and I saw this in the logs:
java.lang.IllegalStateException: org.apache.tomcat.util.http.fileupload.FileUploadBase$SizeLimitExceededException: the request was rejected because its size (57353297) exceeds the configured maximum (52428800)
It looks like by default the Manager web UI will only upload WARs of 50MB (52428800 bytes) or smaller. The analytics service WAR file is larger than this, which is why the deployment fails. I was able to increase the limit by modifying the following lines, which live inside the Manager servlet's <multipart-config> element in
/webapps/manager/WEB-INF/web.xml
<multipart-config>
    <max-file-size>100000000</max-file-size>
    <max-request-size>100000000</max-request-size>
</multipart-config>
This will increase the limit to 100MB. After I did this, I was able to successfully deploy the service WAR.
Just as a heads up, once you get the WAR deployed, you'll be presented with the login page. You'll need a tomcat user with the 'worklightadmin' role in order to get past the login screen.
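A minimal sketch of the corresponding entries in conf/tomcat-users.xml (username and password are placeholders):
<role rolename="worklightadmin"/>
<user username="analytics" password="changeme" roles="worklightadmin"/>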
The worklight-analytics-service WAR file does not have a user interface. It is simply referenced by the worklight-analytics WAR file. When both WARs have been deployed, can you see the analytics console? And does data load just fine? If so, then everything is fine. There is only an issue if you are unable to use the user interface provided by the worklight-analytics WAR file.

Sitefinity Staging & Synchronization - destination doesn't contain a site with name 'SFDev'

I am trying to use the Sitefinity Staging & Synchronization feature.
I did exactly what is shown in this YouTube video: https://www.youtube.com/watch?v=O-mbXODZ0MI
But I am receiving the following error:
You cannot sync the data, because the destination doesn't contain a site with name 'SFDev'.
I am moving data from SFDev to SFStaging, so how could the destination name be SFDev?
SFDev - running on the Cassini web server
SFStaging - running on local IIS
I am not able to find any information on this error. Any suggestions, please?
Product Version: 7.1.5200.0
The site names have to be the same. When you make the first move, it is all "manual": the code and DBs must match from the beginning. The sync tool is NOT a database mover; it only syncs specific areas of the database. Please reference these instructions:
http://www.sitefinity.com/documentation/documentationarticles/installation-and-administration-guide/syncing-of-data
I don't think the source or destination site can be running the Cassini web server while executing a SiteSync.
Have a look here:
http://www.sitefinity.com/documentation/documentationarticles/prerequisites-and-restrictions
All sites must be deployed on IIS.