GCP: how to edit Cloud API access scopes on running nodes in GKE

I have an issue: on an existing cluster the Cloud API access scope was set to "Storage Read Only", but a CronJob has to push backups to Cloud Storage and gets a 403 error. So how can I change the scope?

Related

REST API call for Kubeflow in GCP

I have a pipeline deployed through GCP cloud-based Kubeflow. Now I want to manage the pipeline from outside that Google account, i.e. from a different Google account or from a local host, through REST API calls. But I am getting an error when I try. Do I need the details of a Google service account? If so, how do I pass that information? Kindly help me solve this issue.
Once again, stating the problem: I need to access the pipeline through REST API calls, and the pipeline is in cloud-based Kubeflow.
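Not answered in the thread, but as an illustrative sketch: one common pattern for reaching a GCP-hosted Kubeflow Pipelines endpoint from another account or a local machine is to authenticate as a service account and send an ID token with each REST call. The sketch below assumes the endpoint sits behind Cloud IAP; the host URL, IAP client ID and key file path are placeholders.

    # Sketch: call the Kubeflow Pipelines REST API with a service account key.
    # HOST, IAP_CLIENT_ID and KEY_FILE are placeholders, not values from the question.
    import requests
    from google.auth.transport.requests import Request
    from google.oauth2 import service_account

    HOST = "https://<your-kfp-endpoint>"        # placeholder
    IAP_CLIENT_ID = "<iap-oauth-client-id>"     # placeholder
    KEY_FILE = "service-account-key.json"       # placeholder

    # Mint an ID token for the IAP audience from the service account key.
    creds = service_account.IDTokenCredentials.from_service_account_file(
        KEY_FILE, target_audience=IAP_CLIENT_ID)
    creds.refresh(Request())

    # List pipelines through the v1beta1 REST API.
    resp = requests.get(
        f"{HOST}/apis/v1beta1/pipelines",
        headers={"Authorization": f"Bearer {creds.token}"},
    )
    print(resp.status_code, resp.json())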

Use Google Storage Transfer API to transfer data from external GCS into my GCS

I am working on a web application which comprises a ReactJS frontend and a Java Spring Boot backend. The application requires users to upload data from their own Google Cloud Storage into my Google Cloud Storage.
The application flow will be as follows:
The frontend requests read access on the user's storage. For this I have used OAuth 2.0 access tokens as described here.
The generated OAuth token will be passed to the backend.
The backend will also have credentials for my service account to allow it to access my Google Cloud APIs. I have created the service account with the required permissions and generated the key using the instructions from here.
The backend will use the generated access token and my service account credentials to transfer the data.
In the final step, I want to create a transfer job using the Google Storage Transfer API. I am using the Java API client provided here for this.
I am having difficulty providing the authentication credentials to the Transfer API.
In my understanding, two different authentications are required: one for reading the user's bucket, and another for starting the transfer job and writing the data into my Cloud Storage. I haven't found any relevant documentation or working examples for my use case. In all the given samples, it is assumed that the same service account credentials have access to both the source and sink buckets.
tl;dr
Does the Google Storage Transfer API allow setting different source and target credentials for GCS to GCS transfers? If yes, how does one provide these credentials in the transfer job specification?
Any help is appreciated. Thanks!
Unfortunately this is not possible with the Storage Transfer API: as you mentioned, for this to work the service account would need access to both the source and the sink buckets.
You can open a feature request in Google's Issue Tracker so that Google's product team can consider such functionality for newer versions of the API; you could also mention that this subject is not covered in the documentation, so it can be improved.
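For completeness, a sketch of the setup the answer does allow: a single service account granted read access on the source bucket and write access on the sink bucket, used to create a GCS-to-GCS transfer job. This uses the Python discovery client rather than the Java client mentioned in the question; the key file, project ID and bucket names are placeholders.

    # Sketch: GCS -> GCS transfer job with one service account that can read the
    # source bucket and write the sink bucket. All names are placeholders.
    from google.oauth2 import service_account
    from googleapiclient import discovery

    creds = service_account.Credentials.from_service_account_file(
        "service-account-key.json",
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    storagetransfer = discovery.build("storagetransfer", "v1", credentials=creds)

    transfer_job = {
        "description": "copy user data into my bucket",
        "status": "ENABLED",
        "projectId": "my-project-id",
        "transferSpec": {
            "gcsDataSource": {"bucketName": "source-bucket"},  # user's bucket
            "gcsDataSink": {"bucketName": "sink-bucket"},      # my bucket
        },
        "schedule": {
            "scheduleStartDate": {"year": 2024, "month": 1, "day": 1},
            "scheduleEndDate": {"year": 2024, "month": 1, "day": 1},  # run once
        },
    }

    job = storagetransfer.transferJobs().create(body=transfer_job).execute()
    print("Created transfer job:", job["name"])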

How can we use DynamoDB Local to save the Cognito ID of authenticated users?

Is there any specific way to create a Cognito identity pool in an Amazon DynamoDB Local setup? I have already set up the JavaScript shell, created several tables, and queried them. I need to provide an authenticated-mode user login (Facebook, Amazon, Google) for my Node application. I found several tutorials about how to set it up using AWS DynamoDB, but I need to know how I can create it using a local DynamoDB without accessing AWS DynamoDB.
Amazon DynamoDB Local doesn't validate credentials, so it doesn't matter how you set up the Amazon Cognito identity pool or the roles for the pool. You will be able to interact with the CognitoCredentials object the same way whether you are using Amazon DynamoDB or DynamoDB Local.
It is important to note, however, that you will not be able to validate fine-grained access control unless you use the full service, again because DynamoDB Local doesn't validate credentials.
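To illustrate the "credentials are not validated" point (the CognitoCredentials object mentioned above is from the JavaScript SDK; this sketch uses Python and boto3 instead, and the endpoint, table name and key are placeholders): the same code runs against DynamoDB Local whether the credentials come from a Cognito identity pool or are dummy values.

    # Sketch: DynamoDB Local accepts any credentials, so Cognito-issued and dummy
    # credentials behave the same. Endpoint, table name and key are placeholders.
    import boto3

    dynamodb = boto3.resource(
        "dynamodb",
        endpoint_url="http://localhost:8000",   # DynamoDB Local endpoint
        region_name="us-east-1",
        aws_access_key_id="dummy",              # not validated by DynamoDB Local
        aws_secret_access_key="dummy",          # not validated by DynamoDB Local
    )

    table = dynamodb.Table("Users")             # placeholder table, hash key "cognitoId"
    table.put_item(Item={"cognitoId": "us-east-1:example-identity-id", "name": "test"})
    print(table.get_item(Key={"cognitoId": "us-east-1:example-identity-id"})["Item"])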

Google Cloud SQL access denied for application deployed in Google App Engine

I have a Google App Engine application that uses Google Cloud SQL. It works locally, but when deployed to Google App Engine I get an access denied error. Locally I have authorized my IP address to access Cloud SQL, and for the App Engine app the application ID is authorized. Is there any other configuration I am missing? Why does the code not work when deployed to Google Cloud?
Are you using a password? Don't. When the app is deployed on GAE and you use a password for Cloud SQL, you will get an access denied error. If you null out your password before you deploy, it will work. The password is only required when you are not connecting from GAE.
There might be a few reasons preventing your GAE app from connecting to your Cloud SQL instance (a minimal connection sketch follows this list):
The App Engine app is not configured to have permissions to Cloud SQL.
Your Cloud SQL connection string is not in the form: /cloudsql/your-project-id:your-instance-name
Your MySQL user was not created using the Cloud SQL administration console.
You are using a password to connect (you shouldn't).
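A minimal connection sketch matching those points, assuming a Python app and the pymysql driver (the instance connection string follows the format above; the user and database names are placeholders); note the /cloudsql/ socket path and that no password is passed.

    # Sketch: connect from App Engine to Cloud SQL over the /cloudsql/ unix socket,
    # with no password. Instance name, user and database are placeholders.
    import pymysql

    conn = pymysql.connect(
        unix_socket="/cloudsql/your-project-id:your-instance-name",
        user="root",        # user created through the Cloud SQL console
        db="my_database",   # placeholder database name
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.close()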

How to share Amazon AWS credentials (S3, EC2, etc)?

I have a personal Amazon account which I use to do a lot of shopping. I also recently linked this account to AWS. Now at work, some guys are doing experiments with AWS using my account. How can I let them access the admin console, etc. without giving them my Amazon credentials? I am not willing to share my Amazon shopping history or other things I use on Amazon, just the cloud services such as EC2 and S3.
What they need is access to the full admin console, and any monitoring tools on AWS.
Use AWS Identity and Access Management (IAM).
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
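As a sketch of what that looks like in practice with boto3 (the user name, temporary password and choice of managed policy are illustrative; your co-workers would sign in at your account's IAM console URL with these credentials instead of your Amazon login):

    # Sketch: create an IAM user with console access and broad permissions.
    # User name, password and policy are illustrative; prefer a tighter policy if you can.
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="coworker1")
    iam.create_login_profile(
        UserName="coworker1",
        Password="TemporaryPassw0rd!",      # changed at first sign-in
        PasswordResetRequired=True,
    )
    iam.attach_user_policy(
        UserName="coworker1",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )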