Calling an API that runs on another GCP project with Airflow Composer

I'm running a task with SimpleHttpOperator on Airflow (Cloud Composer). This task calls an API served by a Cloud Run service living in another project, which means I need a service account in order to access that project.
When I try to make a call to the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What's the solution to such an issue?
Context: Cloud Composer and Cloud Run live in two different projects.

This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission on the connection (call_to_api) you have configured for the API.
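If that's the case, granting the Composer service account access to the secret should fix it. For example (the service account email is a placeholder to fill in; the secret name is taken from your error message):
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api \
    --member="serviceAccount:[COMPOSER-SA-EMAIL]" \
    --role="roles/secretmanager.secretAccessor"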
Check this part of the documentation.

Related

Trigger DAG Run - 403

I am following this tutorial to build a Cloud Function that triggers a DAG run. I have run into a permission issue: when the function is triggered and tries to run the DAG, I get a permission error. It reads as follows:
Service account does not have permission to access the IAP-protected application.
I have followed the recommendation in the tutorial to have a service account with the Composer User role. What am I missing?
Note: I am calling Airflow version 2's Stable REST API and my Composer is version 1.
-Diana
I found what may be a duplicate question here:
Receiving HTTP 401 when accessing Cloud Composer's Airflow Rest API
As Seng Cheong noted in their answer, the reason for this error is that Google Cloud seems to have trouble adding service account IDs longer than 64 characters to the Airflow list of users. After changing my service account ID to one of 64 characters or fewer, I was able to trigger the DAG successfully. If you can't make your service account ID shorter, Google's documentation suggests adding the "numeric user id" corresponding to your service account directly. The steps for doing so can be found here: https://cloud.google.com/composer/docs/access-airflow-api#access_airflow_rest_api_using_a_service_account
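For reference, the call itself can be as simple as the following sketch, which fetches an OIDC ID token for the IAP client ID and posts to the Airflow 2 stable REST API. The URL, client ID, and DAG ID below are placeholders, not my actual values:
import requests
from google.auth.transport.requests import Request
from google.oauth2 import id_token

# Placeholders: substitute your Airflow web server URL, IAP client ID and DAG ID.
WEB_SERVER_URL = "https://your-airflow-webserver-url"
IAP_CLIENT_ID = "your-client-id.apps.googleusercontent.com"
DAG_ID = "your_dag_id"

def trigger_dag():
    # Fetch an OIDC token for the IAP-protected app using the runtime's
    # service account credentials (here, the Cloud Function's identity).
    token = id_token.fetch_id_token(Request(), IAP_CLIENT_ID)
    response = requests.post(
        "{}/api/v1/dags/{}/dagRuns".format(WEB_SERVER_URL, DAG_ID),
        headers={"Authorization": "Bearer {}".format(token)},
        json={"conf": {}},
    )
    response.raise_for_status()
    return response.json()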
Best of luck friend

How to troubleshoot enabling API services in GCP using gcloud

When executing terraform apply, I get this error asking me to enable the IAM API for my project.
Error: Error creating service account: googleapi: Error 403: Identity and Access
Management (IAM) API has not been used in project [PROJECT-NUMBER] before or it is
disabled. Enable it by visiting
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?project=[PROJECT-NUMBER]
then retry. If you enabled this API recently, wait a few minutes for the action to
propagate to our systems and retry., accessNotConfigured
When I attempt to enable it using gcloud, the services enable command just hangs. Is there any way to get more information?
According to the Google Dashboard, everything is green.
I am also seeing the same issue using the UI.
$ gcloud services enable iam.googleapis.com container.googleapis.com
Error Message
ERROR: gcloud crashed (WaitException): last_result=True, last_retrial=178, time_passed_ms=1790337,time_to_wait=10000
Add --log-http to any gcloud command to get detailed logging of the underlying API calls; this may provide more detail on where the error occurs.
You may also wish to reference the project explicitly: --project=....
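For example (with [PROJECT-ID] as a placeholder):
$ gcloud services enable iam.googleapis.com container.googleapis.com \
    --project=[PROJECT-ID] --log-http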
Does IAM need to be enabled? It's such a foundational service, I'm surprised anything would work if it weren't enabled.

internal server error connecting to Aurora

In VS 2017 I created an AWS Serverless Application (.NET Core - C#). I have an RDS (Aurora) database with data in it.
I added MySql.Data to the project using NuGet.
Created a new controller to get the data out of the DB.
Created a method and model to Get data.
Built the project and ran it locally in VS.
I was able to use Postman to GET data from the API. GREAT!
Right-clicked the project and selected Publish to AWS Lambda. Everything published, and I got the new URL.
When using the url/api/method, I get a 500 response. I tried another controller that just returns values with no DB queries, and that works. Any ideas?
The first thing you should do is check the CloudWatch logs for your function for the source of the error (a 500 indicates an Internal Server Error, i.e. your code threw an exception). Add logging to your function as needed if you don't find anything useful there.
Access control is one likely candidate: is your database accessible from your Lambda, and does the deployed function correctly receive the database credentials?
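As a minimal sketch (assuming an ASP.NET Core controller; GetDataFromAurora stands in for your existing query code), you could surface the exception in CloudWatch like this:
[HttpGet]
public IActionResult Get()
{
    try
    {
        var rows = GetDataFromAurora();  // placeholder for your existing query code
        return Ok(rows);
    }
    catch (Exception ex)
    {
        // LambdaLogger (Amazon.Lambda.Core) writes to the function's
        // CloudWatch log group when running in Lambda.
        Amazon.Lambda.Core.LambdaLogger.Log("DB query failed: " + ex);
        return StatusCode(500);
    }
}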

Enable Cloud Vision API to access a file on Cloud Storage

I have already seen that there are some similar questions, but none of them actually provides a full answer.
Since I cannot comment in that thread, I am opening a new one.
How do I address Brandon's comment below?
"...
In order to use the Cloud Vision API with a non-public GCS object,
you'll need to send OAuth authentication information along with your
request for a user or service account which has permission to read the
GCS object."?
I have the JSON file the system gave me when I created the service account, as described here.
I am trying to call the API from a Python script.
It is not clear how to use it.
I'd recommend using the Vision API Client Library for Python to perform the call. You can install it on your machine (ideally in a virtualenv) by running the following command:
pip install --upgrade google-cloud-vision
Next, you'll need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. For example, on a Linux machine you'd do it like this:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Finally, you'll just have to call the Vision API client method you need (for example, the label_detection method here), like so:
from google.cloud import vision
from google.cloud.vision import types  # on google-cloud-vision >= 2.0, use vision.Image() instead


def detect_labels():
    """Detects labels in the file located in Google Cloud Storage."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = "gs://bucket_name/path_to_image_object"
    response = client.label_detection(image=image)
    labels = response.label_annotations
    print('Labels:')
    for label in labels:
        print(label.description)
By initializing the client with no parameters, the library will automatically look for the GOOGLE_APPLICATION_CREDENTIALS environment variable you've previously set and will run on behalf of that service account. If you granted it permission to access the file, the call will succeed.
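If the service account doesn't yet have read access to the object, you can grant it at the bucket level, for example (the bucket name and service account email below are placeholders):
gsutil iam ch serviceAccount:[SA-EMAIL]:roles/storage.objectViewer gs://bucket_name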

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there similar functionality that can be tapped into on the Cloud Service via RDP, such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to be giving me the same level of detail as the Compute Emulator console. I've rerun the role and the corresponding .NET application again on localhost and was unable to find any possible errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
public class EventLogger
{
    public static void Log(string message)
    {
        // Get a reference to an "errors" container, using the storage account
        // connection string from the role configuration.
        CloudBlobContainer cbc = CloudStorageAccount
            .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();

        // Write the message to a blob named uniquely per role instance and timestamp.
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt",
                RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks))
            .UploadText(message);
    }
}
Calling EventLogger.Log() will create a new text file recording whatever message you pass in. I found an example in the answer below.
There is no console for worker roles that I'm aware of. If Diagnostics isn't giving you any help, then you need to get a little hacky: try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here: http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.