Can we run AirSim on a cloud server such as Google Cloud or Azure? - airsim

I am trying to run an AirSim simulation on cloud servers because my computer cannot handle running it; it overheats.
I received the errors below.
Azure:
"insufficient driver or hardware"
"LogInit: Error: Linux_PlatformCreateOpenGLContextCore - Could not create OpenGL 4.3 context, SDL error: 'Could not create GL context: BadValue (integer parameter out of range for operation)'"
Google Cloud:
"LogInit: Warning: Could not initialize SDL: XRandR support is required but not available
LogInit: Error: FLinuxApplication::CreateLinuxApplication() : InitSDL() failed, cannot create application instance."
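Both errors suggest the VMs are missing a GPU driver and a running X server for OpenGL. A minimal sketch of one common workaround on a headless GPU instance, assuming an NVIDIA GPU is attached and Ubuntu driver packages are available (the driver package name and the Blocks binary are assumptions; substitute your driver version and packaged AirSim environment):

```shell
# Install an NVIDIA driver (package name is an assumption; match it to
# the GPU attached to your instance)
sudo apt-get update && sudo apt-get install -y nvidia-driver-470

# Generate an X config that lets Xorg start with no physical monitor
sudo nvidia-xconfig --allow-empty-initial-configuration

# Start a headless X server and point the simulator at it
sudo Xorg :0 &
export DISPLAY=:0
./Blocks.sh -windowed   # hypothetical: your packaged AirSim environment binary
```

Without a GPU-backed X display, Unreal Engine cannot create the OpenGL 4.3 context the first error complains about, and SDL cannot initialize as in the second.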

Related

Calling an API that runs on another GCP project with Airflow Composer

I'm running a task with SimpleHttpOperator on Airflow Composer. This task calls an API that runs on a Cloud Run service living in another project, which means I need a service account in order to access that project.
When I try to call the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What is a solution to such an issue?
Context: Cloud Composer and Cloud Run live in 2 different projects
This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission needed to read the connection (call_to_api) you have configured for the API.
Check this part of the documentation.
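A sketch of granting that permission with gcloud, using the secret name from the error message; COMPOSER_SA and SECRETS_PROJECT are placeholders for your Composer environment's service account and the project that hosts the secrets:

```shell
# Grant the Composer service account read access to the stored connection.
# roles/secretmanager.secretAccessor includes secretmanager.versions.access.
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api \
  --project="SECRETS_PROJECT" \
  --member="serviceAccount:COMPOSER_SA" \
  --role="roles/secretmanager.secretAccessor"
```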

How to troubleshoot enabling API services in GCP using gcloud

When executing terraform apply, I get this error asking me to enable the IAM API for my project:
Error: Error creating service account: googleapi: Error 403: Identity and Access
Management (IAM) API has not been used in project [PROJECT-NUMBER] before or it is
disabled. Enable it by visiting
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?
project=[PROJECT-NUMBER] then retry. If you enabled this API recently, wait a few
minutes for the action to propagate to our systems and retry.,
accessNotConfigured
When I attempt to enable it using gcloud, the enable command just hangs. Is there any way to get more information?
According to the Google Dashboard, everything is green.
I am also seeing the same issue using the UI.
$ gcloud services enable iam.googleapis.com container.googleapis.com
Error Message
ERROR: gcloud crashed (WaitException): last_result=True, last_retrial=178, time_passed_ms=1790337,time_to_wait=10000
Add --log-http to any gcloud command to get detailed logging of the underlying API calls; these may provide more detail on where the error occurs.
You may also wish to reference the project explicitly: --project=....
Does IAM need to be enabled? It's such a foundational service, I'm surprised anything would work if it weren't enabled.
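The suggestions above can be combined into one invocation (PROJECT_ID is a placeholder for your project):

```shell
# Re-run the enable with verbose HTTP logging to see which API call stalls
gcloud services enable iam.googleapis.com container.googleapis.com \
  --project="PROJECT_ID" \
  --log-http
```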

Failed to load APIs - Error 403 on Azure Mobile Service

I have a staging Azure Mobile Service that has suddenly stopped working and started reporting errors when called by other apps.
The direct Mobile Service URL reports an "Error 403 - This web app is stopped." error at https://b8akjsms2-st.azure-mobile.net/. I am also unable to access the API from the Azure portal, which throws this message:
Failed to download zip file for path '/site/repository/service/api/' in Mobile Service 'b8akJSMS2-st' If you contact a support representative please include this correlation identifier: 4ebe635c-bbb7-af06-a71a-532f0467e828, the time of error: 2016-06-10 11:40:36Z, and the error id: ZE6.
How can I resolve this issue?
I can see that the service is on the free tier and is over its CPU quota limits. These will reset at midnight UTC each day (5pm PST). Please feel free to contact us if you have any questions.

Google Cloud Dataflow error: "The Application Default Credentials are not available"

We have a Google Cloud Dataflow job, which writes to Bigtable (via HBase API). Unfortunately, it fails due to:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
    at com.google.bigtable.repackaged.com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:74)
    at com.google.bigtable.repackaged.com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:54)
    at com.google.bigtable.repackaged.com.google.cloud.config.CredentialFactory.getApplicationDefaultCredential(CredentialFactory.java:181)
    at com.google.bigtable.repackaged.com.google.cloud.config.CredentialFactory.getCredentials(CredentialFactory.java:100)
    at com.google.bigtable.repackaged.com.google.cloud.grpc.io.CredentialInterceptorCache.getCredentialsInterceptor(CredentialInterceptorCache.java:85)
    at com.google.bigtable.repackaged.com.google.cloud.grpc.BigtableSession.<init>(BigtableSession.java:257)
    at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:123)
    at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:91)
    at com.google.cloud.bigtable.hbase1_0.BigtableConnection.<init>(BigtableConnection.java:33)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool$1.<init>(CloudBigtableConnectionPool.java:72)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.createConnection(CloudBigtableConnectionPool.java:72)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.getConnection(CloudBigtableConnectionPool.java:64)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.getConnection(CloudBigtableConnectionPool.java:57)
    at com.google.cloud.bigtable.dataflow.AbstractCloudBigtableTableDoFn.getConnection(AbstractCloudBigtableTableDoFn.java:96)
    at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.getBufferedMutator(CloudBigtableIO.java:836)
    at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.processElement(CloudBigtableIO.java:861)
This makes very little sense, because the job is already running on the Cloud Dataflow service/VMs.
The Cloud Dataflow job id: 2016-05-13_11_11_57-8485496303848899541
We are using bigtable-hbase-dataflow version 0.3.0, and we want to use HBase API.
I believe this is a known issue where GCE instances very rarely fail to get the default credentials during startup.
We have been working on a fix which should be part of the next release (1.6.0), coming soon. In the meantime we'd suggest re-submitting the job, which should work. If you run into problems consistently, or want to discuss other workarounds (such as backporting the 1.6.0 fix), please reach out to us.
1.7.0 is released, so this should be fixed now: https://cloud.google.com/dataflow/release-notes/release-notes-java
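On Dataflow workers the default credentials should be picked up automatically. As the error text itself notes, when running outside GCE (for example, testing the pipeline locally) you can point the auth libraries at a service-account key instead; a sketch, with the key path as a placeholder:

```shell
# Point the Google auth libraries at an explicit service-account key file;
# Application Default Credentials check this variable before the GCE
# metadata server
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
```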

Repeated IBM bluemix Node Red app crashing; status 1

My Node-RED application in IBM Bluemix is repeatedly crashing - once an hour - with no real error message other than "exited with status: 1."
How can I troubleshoot this issue?
Is there someone from IBM Bluemix support monitoring this who could take a look?
I looked at my logs and there's nothing in there that really says what's going on.
Edit per requests:
The regular log for "OUT/ERR" scrolls so fast with HTTPD logs that I can't copy/paste it. Filtering to the "ERR" channel, the only thing I see is below. I believe this is an error which occurs during deploy, when the application restarts.
[App/0] ERR js-bson: Failed to load c++ bson extension, using pure JS version
My Node Red application is gathering data from Wink, LIFX, and other IoT services and compiles them together into a Freeboard dashboard.
I caught the crash in a screenshot here -- I don't have enough reputation to post images, so it will only show as a link.
The zlib error was fixed in the Node-RED 0.13.2 release (shipped 19/02/16).
If you re-stage your application, it should pick up the new version of Node-RED.
You can re-stage the application using the cf command-line tool:
cf restage <app name>
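For troubleshooting crashes like this more generally, the cf CLI can dump recent output and lifecycle events (my-node-red-app is a placeholder for your app name):

```shell
# Show recent log lines, including crash output that scrolled past
cf logs my-node-red-app --recent

# Show lifecycle events such as crashes and their exit status
cf events my-node-red-app
```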