"Failed to Get Framework Assemblies Local Path During Pushing Edge Package" message on Azure Stream Analytics Edge Module deployment - azure-stream-analytics

I had a very simple ASA Edge job deployed and running on a device for a week, and as of last Thursday (11/07/2019) the module disappeared from my device and I can no longer add it. Deployment returns the following message: "Failed to Get Framework Assemblies Local Path During Pushing Edge Package".
It looks like the ASA job definition is not being saved to the storage container. I tried to configure the storage account/container both manually and automatically; when I click save, the portal shows the operation-successful message and the logs show that the job received an update, but if I open the storage account setting on the ASA job it is not configured, and if I explore the storage container it's empty.
The storage account is configured as a blob account, hot tier, publicly accessible.
The region is Central US.

It turns out it is a region bug. We created the ASA job in the East US region and it worked. Go figure...

Related

Intermittent HTTP error when loading files from ADLS Gen2 in Azure Databricks

I am getting an intermittent HTTP error when I try to load the contents of files in Azure Databricks from ADLS Gen2. The storage account has been mounted using a service principal associated with Databricks and has been given Storage Blob Data Contributor access through RBAC on the data lake storage account. A sample statement to load is
df = spark.read.format("orc").load("dbfs:/mnt/{storageaccount}/{filesystem}/{filename}")
The error message I get is:
Py4JJavaError: An error occurred while calling o214.load. : java.io.IOException: GET https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}?timeout=90 StatusCode=412 StatusDescription=The condition specified using HTTP conditional header(s) is not met.
ErrorCode=ConditionNotMet ErrorMessage=The condition specified using HTTP conditional header(s) is not met.
RequestId:51fbfff7-d01f-002b-49aa-4c89d5000000
Time:2019-08-06T22:55:14.5585584Z
The error does not occur for all the files in the filesystem; I can load most of them, it fails only for some. Not sure what the issue is here.
This has been resolved now. The underlying issue was due to a change at Microsoft end. This is the RCA I got from Microsoft Support:
There was a storage configuration that was turned on incorrectly during the latest storage tenant upgrade. This type of error would only show up for namespace-enabled accounts on the latest upgraded tenant. The mitigation for this issue is to turn off the configuration on the specific tenant, and we have kicked off the super sonic configuration rollout for all the tenants. We have since added additional Storage upgrade validation for ADLS Gen 2 to help cover this type of scenario.
I had the same problem on one file today. Downloading the file, deleting it from storage and putting it back solved the problem.
Tried renaming the file -> didn't work.
Edit: we are seeing it on more files, at random.
We worked around the problem by copying the entire folder to a new folder and renaming it back to the original name. Jobs run without problems again.
Still the question remains, why did the files end up in this situation?
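If you need to script that copy-and-rename workaround in a Databricks notebook, a minimal sketch could look like this (paths are hypothetical; dbutils is only available inside Databricks):

# Copy the affected folder aside, remove the original, then move the copy back.
src = "dbfs:/mnt/{storageaccount}/{filesystem}/somefolder"
tmp = "dbfs:/mnt/{storageaccount}/{filesystem}/somefolder_copy"
dbutils.fs.cp(src, tmp, recurse=True)
dbutils.fs.rm(src, recurse=True)
dbutils.fs.mv(tmp, src, recurse=True)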
Same issue here. After some research, it seems it was probably an If-Match ETag condition failure in the HTTP GET request. Microsoft describe how they return error 412 when this happens in this post: https://azure.microsoft.com/de-de/blog/managing-concurrency-in-microsoft-azure-storage-2/
Regardless, Databricks seem to have resolved the issue on their end now.
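For anyone curious, the conditional-request behaviour described above can be sketched with a plain HTTP GET; the following is illustrative only (URL, token and ETag are placeholders), showing how a stale If-Match header yields the 412 ConditionNotMet response quoted in the question:

import requests

url = "https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}"
headers = {
    "Authorization": "Bearer <access-token>",  # placeholder credential
    "If-Match": '"0x8D7195EE7D90F00"',         # stale ETag previously cached by the client
}

resp = requests.get(url, headers=headers, timeout=90)
if resp.status_code == 412:
    # "The condition specified using HTTP conditional header(s) is not met."
    print("ConditionNotMet:", resp.text)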

WSO2 APIM 2.0 Gateway-Worker-Node: "the requested resource XXX is not available"

I have a gateway manager (GWM) with 2 worker nodes. When I deploy an API it's pushed to the GWM and is available there --> the API call works fine.
I decided to synchronize the APIs from the GWM to the worker nodes via rsync. The filesystems under ~wso2/repository/deployment/server on the worker nodes are synced and match the GWM node.
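For illustration, a sync of that directory might look something like this (host and user are placeholders; not necessarily the exact command used):

rsync -avz --delete ~wso2/repository/deployment/server/ wso2@worker-node:~wso2/repository/deployment/server/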
But when I call the API on a worker node I get this message:
<am:fault xmlns:am="http://wso2.org/apimanager"><am:code>404</am:code>
<am:type>Status report</am:type><am:message>Not Found</am:message>
<am:description>The requested resource (/XXX/1/foo) is not available.
</am:description>
</am:fault>
I also restarted the workers, but same result.
Did I miss something or is there a trigger to load the APIs on the workers to the cache, or something like this?
Faced the same issue when the contents of mediation files were changed.
**Solution which worked for me:**
Demote your API to the Created state
Ensure the gateway is checked
Redeploy the API

Re-configuring a Worklight application with analytics

After redeploying a Worklight application, some configuration for analytics got lost and I'm trying to configure Worklight with analytics again.
The dashboard shows "No data available" for the time after the deployment, although old records from before the deployment are still displayed. So the DB was not affected.
I set the wl.analytics.logs.forward property to "true" in worklight.properties,
and I set wl.analytics.url to something like:
https://myserver:port/analytics/data
The dashboard is on
https://myserver:port/analytics/console
That is the URL for the analytics server.
However, if I put that URL in a browser I get something like:
Error 404: java.io.FileNotFoundException: SRVE0190E: File not found: /data
I checked SystemOut.log and SystemErr.log (the WAS logs) and did not see any errors there.
Does anybody know which XML file I need to check in order to validate that the analytics configuration is OK? How can I troubleshoot this problem? Are there other logs I could check?
In the list of environment variables you gave I do not see any for username and password. Try to set:
wl.analytics.password=admin
wl.analytics.username=admin
It would be useful to see a Wireshark trace to confirm whether or not you are getting 403s. The Analytics data uploader generally has a small amount of protection, and you have the option to keep or remove it.
@patbarron is correct about the multiple WAR files, though. You need to send your analytics data to the /analytics-service context. The analytics-service WAR is the one that handles all the data processing, querying, etc.; the analytics WAR just serves the console UI.
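So, with the data pointed at the analytics-service context, the relevant worklight.properties entries would look something like this (host, port and credentials are placeholders based on the values discussed above):

wl.analytics.logs.forward=true
wl.analytics.url=https://myserver:port/analytics-service/data
wl.analytics.username=admin
wl.analytics.password=admin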
When testing, it might be beneficial to lower wl.analytics.queue and wl.analytics.queue.size; those values govern how analytics data is collected on the MobileFirst runtime server. Data is collected at the runtime server and then sent to the analytics server, and generally the larger these values are, the longer it takes to send. The larger values are good for production, though.
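For example, in a test environment you might temporarily set something like the following in worklight.properties (values are illustrative only):

wl.analytics.queue=2
wl.analytics.queue.size=2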

IBM MobileFirst Platform 6.3 Operational Analytics Failed installation for Tomcat

I have installed the MobileFirst 6.3 Application Center console and Worklight Console successfully; they are operating fine on Tomcat 7.0.57. However, when I try to install Operational Analytics I run into problems. The documentation I am following is:
http://www-01.ibm.com/support/knowledgecenter/SSHS8R_6.3.0/com.ibm.worklight.installconfig.doc/monitor/c_op_analytics_installation_tomcat.html
I am using the Tomcat Manager (http://localhost:8080/manager/html) to deploy the WAR files, logging in as manager with the manager-gui role.
worklight-analytics.war - deployed with no issues
When I select the worklight-analytics-service.war file and deploy it in the GUI, it first shows a blank page indicating "connection error", and when I refresh the page, the status bar in the Tomcat Manager GUI shows this message: "FAIL - Tried to use command /upload via a GET request but POST is required".
Please provide some direction on what I need to do to get this fixed. I am not sure if I have provided all the required information - please bear with me and ask if anything relevant (obviously I can't figure out what is relevant yet) is missing.
So I was able to reproduce your error and I saw this in the logs:
java.lang.IllegalStateException:
org.apache.tomcat.util.http.fileupload.FileUploadBase$SizeLimitExceededException:
the request was rejected because its size (57353297) exceeds the
configured maximum (52428800)
It looks like by default the web UI will only upload WARs of 50 MB or smaller. The analytics service WAR file is larger than this, which is why the deployment fails. I was able to increase the limit by modifying the following lines in
/webapps/manager/WEB-INF/web.xml (they sit inside the <multipart-config> element of the manager servlet definition):
<multipart-config>
  <max-file-size>100000000</max-file-size>
  <max-request-size>100000000</max-request-size>
</multipart-config>
This raises the limit to 100 MB. After I did this, I was able to successfully deploy the service WAR.
Just as a heads up, once you get the WAR deployed, you'll be presented with the login page. You'll need a tomcat user with the 'worklightadmin' role in order to get past the login screen.
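If such a user does not exist yet, a minimal sketch of the relevant entries in conf/tomcat-users.xml might look like this (username and password are placeholders):

<role rolename="manager-gui"/>
<role rolename="worklightadmin"/>
<user username="admin" password="changeit" roles="manager-gui,worklightadmin"/>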
The worklight-analytics-service WAR file does not have a user interface. It is simply referenced by the worklight-analytics WAR file. When both WARs have been deployed, can you see the analytics console? And does data load just fine? If so, then everything is fine. There is only an issue if you are unable to use the user interface provided by the worklight-analytics WAR file.

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there similar functionality that can be tapped into on the cloud service via RDP, such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to give me the same level of detail as the Compute Emulator console. I've rerun the role and the corresponding .NET application again on localhost and was unable to find any errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class EventLogger
{
    public static void Log(string message)
    {
        // Resolve the storage account from the role configuration and
        // get a reference to the "errors" container.
        CloudBlobContainer cbc = CloudStorageAccount
            .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();

        // Write the message to a blob named after the role instance and a timestamp.
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt",
                RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks))
           .UploadText(message);
    }
}
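For reference, the "StorageClientAccount" setting read by RoleEnvironment.GetConfigurationSettingValue above needs to be defined in the role's ServiceConfiguration.cscfg, roughly like this (the connection string is a placeholder):

<ConfigurationSettings>
  <Setting name="StorageClientAccount" value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
</ConfigurationSettings>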
Calling EventLogger.Log() will create a new text file and record whatever message you pass in. I found an example in the answer below.
There is no console for worker roles that I'm aware of. If diagnostics isn't giving you any help, then you need to get a little hacky. Try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.