I am trying to get a Google Cloud Function to run whenever I insert into a table within a specific dataset in my GCP project. From what I've seen in other Stack Overflow questions, I know it is possible to use Eventarc (2nd gen) to listen to cloud events and trigger Cloud Functions. From looking at my BigQuery logs, I think what I want is for the Cloud Function to trigger when the log entries match:
protoPayload.methodName:"google.cloud.bigquery.v2.JobService.InsertJob"
resource.labels.dataset_id:"the_specific_dataset"
However, after attempting to follow multiple guides, I'm hitting perplexing errors in Cloud Shell. Sources I've already tried: Google Codelabs and this blog post.
What I Ran
In Cloud Shell, I enabled everything needed to run it:
gcloud config set project org-internal
PROJECT_ID=org-internal
gcloud services enable run.googleapis.com
gcloud services enable eventarc.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable cloudbuild.googleapis.com
REGION=us-central1
gcloud config set run/region $REGION
gcloud config set run/platform managed
gcloud config set eventarc/location $REGION
Then I configured my service account in Cloud Shell to have the eventarc.eventReceiver and pubsub.publisher roles:
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID \
  --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/eventarc.eventReceiver
Then I deployed to Cloud Run:
SERVICE_NAME=hello
gcloud run deploy $SERVICE_NAME \
  --image=gcr.io/cloudrun/hello \
  --allow-unauthenticated
I was able to successfully create a trigger for Cloud Pub/Sub; it ran without issue. However, when I tried to apply event filters specific to table inserts, I ran into issue after issue. Here's what I tried, with the errors:
gcloud eventarc triggers create $TRIGGER_NAME \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters="protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob" \
  --event-filters="resource.labels.dataset_id:the_specific_dataset"
The error: ERROR: (gcloud.eventarc.triggers.create) argument --event-filters: Bad syntax for dict arg: [protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob].
gcloud eventarc triggers create $TRIGGER_NAME \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="type=google.cloud.audit.log.v1.written” \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob \
  --event-filters resource.labels.dataset_id=the_specific_dataset
The error: ERROR: (gcloud.eventarc.triggers.create) INVALID_ARGUMENT: The request was invalid: invalid value for attribute 'type' in trigger.event_filters:
I tried a few other formatting variations (e.g. with quotes and without), but nothing seems to work. My questions are: can I filter on "resource.labels.dataset_id" and on "methodName=google.cloud.bigquery.v2.JobService.InsertJob"? If so, what am I doing wrong?
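Based on the "Bad syntax for dict arg" error, I suspect --event-filters only accepts plain key=value pairs, so I expect the working form to look roughly like the sketch below (using methodName as the attribute key is my assumption, and I've left out the dataset filter because I don't know whether it can be expressed here):
# Sketch of the key=value form I think the flag expects; the dataset filter is omitted
# because I'm not sure it is supported as an event-filter attribute.
gcloud eventarc triggers create $TRIGGER_NAME \
  --destination-run-service=$SERVICE_NAME \
  --destination-run-region=$REGION \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=bigquery.googleapis.com" \
  --event-filters="methodName=google.cloud.bigquery.v2.JobService.InsertJob" \
  --service-account="$PROJECT_NUMBER-compute@developer.gserviceaccount.com"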
Related
I'm running a task with SimpleHTTPOperator on Airflow (Cloud Composer). This task calls an API that runs on a Cloud Run service living in another project, which means I need a service account in order to access that project.
When I try to make a call to the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What is the solution to this issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission on the connection (call_to_api) you have configured for the API.
Check this part of the documentation.
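For example, a minimal sketch of the grant on the secret from the error message (COMPOSER_SA_EMAIL and SECRET_PROJECT_ID are placeholders for your Composer environment's service account and the project that holds the secret):
# Grant the Composer service account access to the secret backing the connection.
# COMPOSER_SA_EMAIL and SECRET_PROJECT_ID are placeholders.
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api \
  --project=SECRET_PROJECT_ID \
  --member="serviceAccount:COMPOSER_SA_EMAIL" \
  --role="roles/secretmanager.secretAccessor"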
When executing terraform apply, I get this error asking me to enable the IAM API for my project.
Error: Error creating service account: googleapi: Error 403: Identity and Access
Management (IAM) API has not been used in project [PROJECT-NUMBER] before or it is
disabled. Enable it by visiting
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?
project=[PROJECT-NUMBER] then retry. If you enabled this API recently, wait a few
minutes for the action to propagate to our systems and retry.,
accessNotConfigured
When I attempt to enable it using gcloud, the service enable just hangs. Is there any way to get more information?
According to the Google Dashboard, everything is green.
I am also seeing the same issue using the UI.
$ gcloud services enable iam.googleapis.com container.googleapis.com
Error Message
ERROR: gcloud crashed (WaitException): last_result=True, last_retrial=178, time_passed_ms=1790337,time_to_wait=10000
Add --log-http to (any) gcloud command to get detailed logging of the underlying API calls. These may provide more details on where the error occurs.
You may wish to explicitly reference the project too: --project=....
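For example (PROJECT_ID is a placeholder for your project):
# Re-run the enable call with an explicit project and HTTP logging to see where it stalls.
gcloud services enable iam.googleapis.com container.googleapis.com \
  --project=PROJECT_ID \
  --log-http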
Does IAM need to be enabled? It's such a foundational service, I'm surprised anything would work if it weren't enabled.
I'm trying to run the AWS Logs Agent inside a Docker container running on AWS ECS Fargate.
This has been working fine under EC2 for several years. In the Fargate context, it does not seem to be able to resolve the task role being passed to it.
Permissions on the Task Role should be good... I've even tried giving it full CloudWatch permissions to eliminate that as a reason.
I've managed to hack the Python-based launcher script to add a --debug flag, which gave me this in the log:
Caught retryable HTTP exception while making metadata service request to
http://169.254.169.254/latest/meta-data/iam/security-credentials
It does not appear to be properly resolving the credentials that are passed into the task as the task role.
I managed to find a hacky workaround that may illustrate what I believe to be a bug or shortcoming in the agent. I had to hack the launcher script using sed as follows:
sed -i "s|HTTPS_PROXY|AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI HTTPS_PROXY|"
/var/awslogs/bin/awslogs-agent-launcher.sh
This essentially dereferences the environment variable holding the URI for retrieving the task role credentials and passes it to the agent's launcher.
It results in something like this:
/usr/bin/env -i AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/f4ca7e30-b73f-4919-ae14-567b1262b27b (etc...)
With this in place, I restart the log agent and it works as expected.
Note that you can use a similar edit to add the --debug flag to the launcher as well, which was very helpful in figuring out where things went astray.
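As a sketch only (the pattern to match is an assumption; it depends on how your version of the launcher script invokes the agent):
# Sketch: inject --debug into the agent invocation inside the launcher script.
# Matching on "aws logs push" is an assumption about the script's contents.
sed -i "s|aws logs push|aws logs push --debug|" /var/awslogs/bin/awslogs-agent-launcher.sh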
Is it possible to get build logs using an API call?
gcloud builds log BUILD_ID
I have to do it from my Node.js app.
Thanks,
Yes.
The CLI command would be of the form:
BUILD_ID=[[SOME-BUILD-ID]]
gcloud logging read "resource.type=\"build\" resource.labels.build_id=\"${BUILD_ID}\" " \
--project=${PROJECT} ...
NB If you augment the above command with the global --log-http, the output will include details of the underlying API methods. This is a good way to map gcloud commands to APIs.
The underlying API is logging.googleapis.com/v2
A good approach is to build the filter using Logs Viewer:
https://console.cloud.google.com/logs/viewer?project=${PROJECT}&advancedFilter=resource.type%3D%22build%22
Or, if like me, you like playing with jq:
BUILD_ID=...
gcloud logging read "resource.type=\"build\" resource.labels.build_id=\"${BUILD_ID}\" " \
--project=${PROJECT} \
--limit=50 \
--format="json" \
| jq -r .[].textPayload
You may interact with any Google API using the wonderful and understated APIs Explorer. Here's API Explorer pre-selected with logging:
https://developers.google.com/apis-explorer/#search/logging/logging/v2/logging.entries.list
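As a rough sketch of calling that method directly (gcloud is used here only to mint an access token; PROJECT and BUILD_ID must already be set):
# Query the entries.list method of the Logging API directly.
curl -s -X POST "https://logging.googleapis.com/v2/entries:list" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{
    \"resourceNames\": [\"projects/${PROJECT}\"],
    \"filter\": \"resource.type=\\\"build\\\" resource.labels.build_id=\\\"${BUILD_ID}\\\"\"
  }"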
You mentioned using Node.js. Google provides SDKs for all its services in a number of popular languages and runtimes; here's a page describing the Logging API with Node.js examples:
https://cloud.google.com/logging/docs/reference/libraries#client-libraries-install-nodejs
I have my application deployed in an AWS EKS cluster, and now I want to update the deployment with the new image I built from the most recent Git commit.
I did try to use:
kubectl set image deployment/mydeploy mydeploy=ECR:2.0
error: unable to find container named "stag-simpleui-deployment"
I also tried:
kubectl rolling-update mydeploy mydeploy.2.0 --image=ECR:2.0
Command "rolling-update" is deprecated, use "rollout" instead
Error from server (NotFound): replicationcontrollers "stag-simpleui-deployment" not found
It is confusing that so many articles describe different approaches, but none of them works.
I was able to crack it. In the command below, the name before the "=" ("mydeploy=") must match the container name in your deployment, as shown in "kubectl edit deployment mydeploy".
kubectl set image deployment/mydeploy mydeploy=ECR:2.0
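A quick sketch for finding the right name to put on the left of the "=" (CONTAINER_NAME is a placeholder for whatever the first command prints):
# List the container name(s) defined in the deployment's pod template.
kubectl get deployment mydeploy \
  -o jsonpath='{.spec.template.spec.containers[*].name}'

# Use that container name, not the deployment or image name, on the left of "=".
kubectl set image deployment/mydeploy CONTAINER_NAME=ECR:2.0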