How to connect to Google Cloud Storage via ADF using a Service Account key - azure-data-factory-2

I am creating a Linked Service to connect to Google Cloud Storage, and I am using the following JSON key file for the service account that has access to the Google Cloud Storage bucket.
I am using private_key_id as the Access Key ID, the private_key value as the Secret Access Key, and https://storage.googleapis.com as the Service URL:
The Linked Service fails with the following error:
How to create a Linked Service using this JSON file to connect to Google Cloud Storage?

Does this documentation help? https://learn.microsoft.com/en-us/azure/data-factory/connector-google-cloud-storage?tabs=data-factory#linked-service-properties
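Per that connector page, the ADF linked service does not use the service-account JSON key at all: the Access Key ID / Secret Access Key fields expect an HMAC key pair, which you generate in the Cloud Storage console under Settings > Interoperability for that service account. A sketch of the resulting linked-service JSON (the name and placeholder values are illustrative, not from your setup):

```json
{
  "name": "GoogleCloudStorageLinkedService",
  "properties": {
    "type": "GoogleCloudStorage",
    "typeProperties": {
      "accessKeyId": "<HMAC access key for the service account>",
      "secretAccessKey": {
        "type": "SecureString",
        "value": "<HMAC secret>"
      },
      "serviceUrl": "https://storage.googleapis.com"
    }
  }
}
```

So the error is expected: private_key_id and private_key from the JSON key file are not valid HMAC credentials.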

Related

GCP: how to edit Cloud API access scopes on running nodes in GKE

I have an issue: an existing cluster was created with the Cloud API access scope "Storage Read Only", but a cronjob needs to push backups to Cloud Storage and gets a 403 error. How can I change the scope?

Use Google Storage Transfer API to transfer data from external GCS into my GCS

I am working on a web application which comprises of ReactJs frontend and Java SpringBoot backend. This application would require users to upload data from their own Google Cloud storage into my Google Cloud Storage.
The application flow will be as follows -
The frontend requests the user for read access on their storage. For this I have used oauth 2.0 access tokens as described here
The generated Oauth token will be passed to the backend.
The backend will also have credentials for my service account to allow it to access my Google Cloud APIs. I have created the service account with required permissions and generated the key using the instructions from here
The backend will use the generated access token and my service account credentials to transfer the data.
In the final step, I want to create a transfer job using the Google Storage Transfer API. I am using the Java API client provided here for this.
I am having difficulty providing the authentication credentials to the transfer api.
In my understanding, there are two different authentications required - one for reading the user's bucket and another for starting the transfer job and writing the data in my cloud storage. I haven't found any relevant documentation or working examples for my use-case. In all the given samples, it is always assumed that the same service account credentials will have access to both the source and sink buckets.
tl;dr
Does the Google Storage Transfer API allow setting different source and target credentials for GCS-to-GCS transfers? If yes, how does one provide these credentials in the transfer job specification?
Any help is appreciated. Thanks!
Unfortunately, this is not allowed by the GCS Transfer API; for it to work, the service account would need access to both the source and the sink buckets, as you mentioned.
You can open a feature request in Google's Issue Tracker so that Google's product team can consider such functionality for newer versions of the API. You could also mention that this subject is not covered in the documentation, so it can be improved.
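For the supported case, where one service account has access to both buckets, a transferJobs.create request body looks roughly like the following (project and bucket names are placeholders; this is a sketch based on the REST API, not a verified job definition):

```json
{
  "description": "GCS-to-GCS transfer",
  "status": "ENABLED",
  "projectId": "my-project-id",
  "schedule": {
    "scheduleStartDate": { "year": 2023, "month": 1, "day": 1 },
    "scheduleEndDate": { "year": 2023, "month": 1, "day": 1 }
  },
  "transferSpec": {
    "gcsDataSource": { "bucketName": "source-bucket" },
    "gcsDataSink": { "bucketName": "sink-bucket" }
  }
}
```

Setting scheduleEndDate equal to scheduleStartDate makes the job run once.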

Accessing a GCS bucket from GCE without credentials using an S3 library

I am trying to migrate an existing application that was using IAM permissions to write to an S3 bucket from EC2. According to Google documentation, you can keep the same code and take advantage of the S3 compatibility of the GCS APIs. However, using the same code (I am just overriding the endpoint to use storage.googleapis.com instead), I hit the following exception:
com.amazonaws.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:115)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
at
Is there a way to do that without having to pass an access key and a secret key to my app?
If you want to keep using your existing API, the only way is to use a Google developer key. A simple migration always requires these two steps:
Change the request endpoint to the Cloud Storage request endpoint. As you mentioned, you have already completed this step by overriding the endpoint:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
Replace the AWS access and secret key with your Google developer key:
Because you can no longer use the IAM permissions you previously set on AWS, authorization must be done with an access key and a secret key. You will need to include an Authorization request header using your Google access key, with a signature created using your Google secret key:
Authorization: AWS GOOG-ACCESS-KEY:signature
For further information, please check Authenticating in a simple migration scenario.
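As a rough, self-contained illustration of that signing step, the signature is base64(HMAC-SHA1(secret, string-to-sign)). The class name, key values, and example string-to-sign below are hypothetical; the exact string-to-sign format is defined in the linked documentation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class GoogSigner {
    // V2-style signature: base64-encode the HMAC-SHA1 of the string-to-sign,
    // keyed with the Google secret key.
    static String sign(String secretKey, String stringToSign) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] raw = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical string-to-sign: VERB\nContent-MD5\nContent-Type\nDate\nResource
        String stringToSign = "GET\n\n\nTue, 11 Apr 2023 00:00:00 GMT\n/my-bucket/my-object";
        String signature = sign("GOOG-SECRET-KEY", stringToSign);
        System.out.println("Authorization: AWS GOOG-ACCESS-KEY:" + signature);
    }
}
```

The resulting header goes on each request in place of the AWS credentials.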

Unable to create platform for GCM on AWS

I have created a platform for an iOS app and it was created fine on AWS, but when I tried to create a GCM platform in the SNS section it gives an error. I have attached the image.
I got exactly the same issue as yours. It seems Google is migrating Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), and the API key created via Credentials in the API Manager of Google Cloud Platform is not working.
And here is how I got it to work.
Go to the Firebase Console and import the Google Cloud project.
Go to Project settings in the Firebase Console and you should see the Web API Key of your project.
Go back to Google Cloud Platform and open Credentials in API Manager; you should see that two API keys have been generated: Browser key (auto created by Google Service) and Server key (auto created by Google Service).
The Server key (auto created by Google Service) is what you need to use on Amazon SNS.
Hope this resolves your problem, and hopefully it is only a temporary workaround; once Google has finished the migration, we should be able to use the API key created in API Manager directly.
You should check if you are using the correct server key. It should be something like AIzaSyZ-1u...0GBYzPu7Udno5aA.

Google Cloud SQL access denied for application deployed in Google App Engine

I have a Google App Engine application that uses Google Cloud SQL. It works locally, but when it is deployed to App Engine I get access denied. Locally I have defined an IP address to access Cloud SQL, and for the App Engine app the application ID is defined. Is there some other config that is missing? Why does the code not work when deployed to Google Cloud?
Are you using a password? Don't. When the app is deployed on GAE and you use a password for Cloud SQL, you will get an access denied error. If you null out your password before you deploy, it will work. The password is only required when you are not using GAE.
There might be a few reasons that are preventing your GAE instance from connecting to your Cloud SQL instance:
The App Engine app is not configured to have permissions to Cloud SQL
Your Cloud SQL connection string is not in the form: /cloudsql/your-project-id:your-instance-name
Your MySQL user was not created using the Cloud SQL administration console.
You are using a password to connect (you shouldn't)
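To illustrate points 2 and 4 above, here is a minimal sketch of building the connection string a GAE Java app would use for the legacy MySQL driver (the class, gaeJdbcUrl, and all names are hypothetical placeholders; note that no password appears):

```java
public class CloudSqlUrl {
    // On GAE, the app reaches Cloud SQL via the /cloudsql unix socket; the
    // JDBC URL embeds "project-id:instance-name" and no password is supplied.
    static String gaeJdbcUrl(String projectId, String instanceName, String db) {
        return String.format("jdbc:google:mysql://%s:%s/%s?user=root",
                projectId, instanceName, db);
    }

    public static void main(String[] args) {
        System.out.println(gaeJdbcUrl("your-project-id", "your-instance-name", "mydb"));
    }
}
```

The same project-id:instance-name pair is what appears after /cloudsql in point 2 of the list above.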