When executing terraform apply, I get the following error asking me to enable the IAM API for my project.
Error: Error creating service account: googleapi: Error 403: Identity and Access
Management (IAM) API has not been used in project [PROJECT-NUMBER] before or it is
disabled. Enable it by visiting
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?
project=[PROJECT-NUMBER] then retry. If you enabled this API recently, wait a few
minutes for the action to propagate to our systems and retry.,
accessNotConfigured
When I attempt to enable it using gcloud, the services enable command just hangs. Is there any way to get more information?
According to the Google Dashboard, everything is green.
I am also seeing the same issue using the UI.
$ gcloud services enable iam.googleapis.com container.googleapis.com
Error Message
ERROR: gcloud crashed (WaitException): last_result=True, last_retrial=178, time_passed_ms=1790337,time_to_wait=10000
Add --log-http to any gcloud command to get detailed logging of the underlying API calls; this may provide more detail on where the error occurs.
You may wish to explicitly reference the project too: --project=....
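For instance, a rough sketch of that kind of debugging (my-project is a placeholder project ID):
# Confirm whether the API already shows as enabled for the project
gcloud services list --enabled --project=my-project | grep iam.googleapis.com
# Retry the enable call with full HTTP logging to see which request stalls
gcloud services enable iam.googleapis.com --project=my-project --log-http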
Does IAM need to be enabled? It's such a foundational service, I'm surprised anything would work if it weren't enabled.
I am following this tutorial to build a Cloud Function that triggers a DAG run. I have run into a permission issue. Upon the function being triggered and thus trying to run the DAG, I get a permission error message. It reads as follows:
Service account does not have permission to access the IAP-protected application.
I have followed the recommendation in the tutorial to have a service account with the Composer User role. What am I missing?
Note: I am calling Airflow version 2's Stable REST API and my Composer is version 1.
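For context, the role binding from the tutorial looks roughly like this on my side (project ID and service account email are placeholders):
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-function-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/composer.user"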
I found what may be a duplicate question here:
Receiving HTTP 401 when accessing Cloud Composer's Airflow Rest API
As Seng Cheong noted in their answer, the reason for this error is that Google Cloud seems to have issues adding service account IDs that are longer than 64 characters to the Airflow list of users. Upon changing my service account ID to one <= 64 characters, I was able to trigger the DAG successfully. If you can't make your service account ID shorter, then Google documentation suggests adding the "numeric user id" corresponding to your service account directly. The steps for how to do so can be found here: https://cloud.google.com/composer/docs/access-airflow-api#access_airflow_rest_api_using_a_service_account
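A rough sketch of those documented steps (environment name, location, and service account email are placeholders; see the linked page for the exact flags):
# Look up the numeric (unique) ID of the service account
SA_EMAIL=my-sa@my-project.iam.gserviceaccount.com
NUMERIC_ID=$(gcloud iam service-accounts describe $SA_EMAIL --format='value(uniqueId)')
# Register that numeric ID as an Airflow user (Airflow 2)
gcloud composer environments run my-environment --location us-central1 \
  users create -- -u accounts.google.com:$NUMERIC_ID -e $SA_EMAIL \
  -f Service -l Account -r Op --use-random-password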
I'm running a task with SimpleHTTPOperator on Airflow Composer. This task calls an API that runs on Cloud Run Service living in another project. This means I need a service account in order to access the project.
When I try to make a call to the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What's a solution to this issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission needed to access the connection (call_to_api) you have configured for the API.
Check this part of the documentation.
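For example, granting the predefined Secret Manager Secret Accessor role (which includes secretmanager.versions.access) on that secret to the Composer service account would look something like this (the service account email and project are placeholders):
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api \
  --member="serviceAccount:composer-env-sa@my-composer-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor" \
  --project=my-composer-project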
I am trying to leverage Google Cloud Functions to run whenever I run an insert on a table within a specific dataset in my GCP project. From what I've seen in other Stack Overflow questions, I know it is possible to use Eventarc (2nd gen) to listen to cloud events and trigger Cloud Functions. From looking at my BigQuery logs, I think what I want is for the Cloud Function to trigger when the logs match:
protoPayload.methodName:"google.cloud.bigquery.v2.JobService.InsertJob"
resource.labels.dataset_id:"the_specific_dataset"
However, after attempting to follow multiple guides, I'm hitting perplexing errors in Cloud Shell. Sources I've already tried to use: Google Codelabs and this blog post.
What I Ran
In Cloud Shell, I enabled the required APIs and set my defaults:
gcloud config set project org-internal
PROJECT_ID=org-internal
gcloud services enable run.googleapis.com
gcloud services enable eventarc.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable cloudbuild.googleapis.com
REGION=us-central1
gcloud config set run/region $REGION
gcloud config set run/platform managed
gcloud config set eventarc/location $REGION
Then, in Cloud Shell, I granted my service account the roles/eventarc.eventReceiver and roles/pubsub.publisher roles (the Pub/Sub binding is sketched after the snippet below):
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID \
  --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/eventarc.eventReceiver
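The Pub/Sub Publisher binding I mentioned was roughly analogous (assuming the same default compute service account):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/pubsub.publisher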
Then I deployed to CloudRun
SERVICE_NAME=hello
gcloud run deploy $SERVICE_NAME \
  --image=gcr.io/cloudrun/hello \
  --allow-unauthenticated
I was able to successfully create a trigger for Cloud Pub/Sub, which ran without issue. However, when I tried to apply the event filters specific to table inserts, I ran into issue after issue. Here's what I tried, along with the errors.
gcloud eventarc triggers create $TRIGGER_NAME \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters="protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob" \
  --event-filters="resource.labels.dataset_id:the_specific_dataset"
The error: ERROR: (gcloud.eventarc.triggers.create) argument --event-filters: Bad syntax for dict arg: [protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob].
gcloud eventarc triggers create $TRIGGER_NAME \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="type=google.cloud.audit.log.v1.written” \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob \
  --event-filters resource.labels.dataset_id=the_specific_dataset
The error: ERROR: (gcloud.eventarc.triggers.create) INVALID_ARGUMENT: The request was invalid: invalid value for attribute 'type' in trigger.event_filters:
I tried a few other formatting variations (e.g. with quotes and without), but nothing seems to be working. I guess my questions are: can I filter on "resource.labels.dataset_id" and on "methodName=google.cloud.bigquery.v2.JobService.InsertJob"? If so, what am I doing wrong?
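For reference, --event-filters expects plain ATTRIBUTE=VALUE pairs, and for Cloud Audit Logs events the supported attributes are type, serviceName, methodName, and resourceName rather than arbitrary log fields such as resource.labels.dataset_id. A sketch that at least passes the flag parsing, with the dataset check left to the receiving service, might look like this (the trigger name is a placeholder):
gcloud eventarc triggers create bq-insert-trigger \
  --location=$REGION \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters="methodName=google.cloud.bigquery.v2.JobService.InsertJob" \
  --service-account="$PROJECT_NUMBER-compute@developer.gserviceaccount.com"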
What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?
During development, you should begin by testing against a locally hosted server. For example, I use VS Code, where you can set breakpoints and watch variables for their values. You can also set up a tool like ngrok to get a public URL for your local endpoint so you can test with non-locally-hosted clients if you'd like.
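A minimal sketch of that local setup, assuming a local MongoDB and placeholder app ID, master key, and port:
# Run a local parse-server instance that loads your Cloud Code
npm install -g parse-server
parse-server --appId myAppId --masterKey myMasterKey \
  --databaseURI mongodb://localhost:27017/dev \
  --cloud ./cloud/main.js --port 1337
# Expose the local endpoint so remote clients can reach it
ngrok http 1337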
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev error channel. Instead of console.logs, which are hard to sift through to find what you're looking for, we push the important information to Slack. We don't switch every single console.log to a Slack message, just the important "hey, something went wrong, here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome. I recommend using Slack, even on a solo project.
At the moment you can write to the logs using console.log() or console.error() in your Cloud Code functions; these, together with the general logs of everything that happens with your app, are available at Back4App under Server Settings -> Logs -> Settings -> Server System Log.
For the logs generated through the Parse Server logger, request.log.info() and request.log.error(), you can access them at Back4App under Dashboard -> Logs.
Every time I use bq on a Cloud Compute instance, I get this:
/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client/contrib/gce.py:73: UserWarning: You have requested explicit scopes to be used with a GCE service account.
Using this argument will have no effect on the actual scopes for tokens
requested. These scopes are set at VM instance creation time and
can't be overridden in the request.
warnings.warn(_SCOPES_WARNING)
This is a default f1-micro instance with Debian 8. I gave this instance access to all Cloud APIs, and its service account is also an owner of the project. I ran gcloud init, but the warning persists.
Is there something wrong?
I noticed that this warning did not appear on an older instance running SDK version 0.9.85; however, I now get it when creating a new instance or upgrading to the latest gcloud SDK.
The scopes warning can be safely ignored, as it's just telling you that the only scopes that will be used are the ones specified at instance creation time, which is the expected behavior of the default GCE service account.
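If you want to confirm which scopes the instance actually has, something along these lines should list them (instance name and zone are placeholders):
gcloud compute instances describe my-instance --zone us-central1-a \
  --format="yaml(serviceAccounts)"
Changing them afterwards means stopping the instance and reapplying them with gcloud compute instances set-service-account --scopes=..., or recreating the instance.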
It seems the 'bq' tool doesn't distinguish between the default service account on GCE and a regular service account and always tries to set the scopes explicitly. The warning comes from oauth2client, and it looks like it didn't display this warning in versions prior to v2.0.0.
I've created a public issue to track this, which you can star to get updates:
https://code.google.com/p/google-bigquery/issues/detail?id=557