I'm trying to run an automated build on my AWS CodeBuild project. I have the following statement in my buildspec.yaml file:
aws eks update-kubeconfig --name ticket-api --region us-west-1 --role-arn arn:aws:iam::12345:role/service-role/CodeBuild-API-Build-service-role
I get the following error:
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::12345:assumed-role/CodeBuild-API-Build-service-role/AWSCodeBuild-99c25416-7046-416e-b5d9-4bff1f4992f3 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::12345:role/service-role/CodeBuild-API-Build-service-role
I'm not sure what the role arn:aws:sts::12345:assumed-role/CodeBuild-API-Build-service-role/AWSCodeBuild-99c25416-7046-416e-b5d9-4bff1f4992f3 is, especially the AWSCodeBuild-99c25416-7046-416e-b5d9-4bff1f4992f3 part. Is it something unique to the current user?
Also, I think the current user is already assigned to the role it is trying to assume.
If I run the same command without the role (aws eks update-kubeconfig --name ticket-api --region us-west-1) then it works, but when I try kubectl version after that I get the following error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
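From the docs, my understanding is that the AWSCodeBuild-99c25416-7046-416e-b5d9-4bff1f4992f3 suffix is just the session name CodeBuild generates for the build's assumed-role session, so my guess is that the role's trust policy needs to allow the role to assume itself. A sketch of what I think that would look like, using the account ID and role name from the error above (an assumption on my part, not a confirmed fix):

# sketch: add the role itself as a trusted principal in its own trust policy
aws iam update-assume-role-policy \
  --role-name CodeBuild-API-Build-service-role \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "codebuild.amazonaws.com" },
        "Action": "sts:AssumeRole"
      },
      {
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::12345:role/service-role/CodeBuild-API-Build-service-role" },
        "Action": "sts:AssumeRole"
      }
    ]
  }'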
I have installed AWS CLI v2 (AWSCLIV2.MSI) on my Windows desktop.
After signing in to my IAM account using the access key ID and secret access key, I try to upload a simple TXT document to my S3 bucket using the following command:
aws s3 cp 4CLItest.txt s3://bucketname
After entering the command, I receive the following error:
upload failed: .\4CLItest.txt to s3://bucketname/4CLItest.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I'm running the following version of AWS CLI:
aws-cli/2.10.0 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
I am able to upload to this bucket if I use the browser with click and drag using the same IAM account.
However, I have a very large upload I need to perform and would like to use the CLI through the command prompt to do so.
Any help or advice would be greatly appreciated.
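In case it helps diagnose, here's a sketch of what I've been checking (the names here are hypothetical): first, which identity the CLI actually uses, since it can differ from the browser session; second, an inline policy that would grant the missing permission.

# check which IAM identity the CLI is actually using
aws sts get-caller-identity

# sketch of an inline policy granting the upload permission
# ("my-user" and "upload-policy" are hypothetical names)
aws iam put-user-policy \
  --user-name my-user \
  --policy-name upload-policy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucketname/*"
    }]
  }'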
I am new to dbt and trying to get started. I have managed to configure profiles.yml to match the Redshift IAM authentication profile, but running dbt debug afterwards gives the following error:
Database Error
Unable to get temporary Redshift cluster credentials: An error occurred (ClusterNotFound) when calling the GetClusterCredentials operation: Cluster cluster_zzz not found.
However, I am able to connect fine from within DataGrip using the exact IAM configuration.
The dbt documentation I followed: https://docs.getdbt.com/reference/warehouse-profiles/redshift-profile
profiles.yml
risk:
  target: dev
  outputs:
    dev:
      type: redshift
      method: iam
      cluster_id: cluster_zzz
      host: cluster_zzz.d4dcyl2bbxyz.eu-west-1.redshift.amazonaws.com
      user: dbuser
      iam_profile: risk
      iam_duration_seconds: 900
      autocreate: true
      port: 5439
      dbname: db_name
      schema: dbname_schema
      threads: 1
      keepalives_idle: 0
I'd appreciate any help, as it would help me move forward with evaluating the tool. Thanks!
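For what it's worth, I believe the call dbt makes under the hood can be reproduced directly with the AWS CLI (a sketch; the profile and region flags are my assumptions based on the config above):

# reproduce dbt's GetClusterCredentials call outside of dbt
aws redshift get-cluster-credentials \
  --cluster-identifier cluster_zzz \
  --db-user dbuser \
  --profile risk \
  --region eu-west-1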
I managed to find the issue. I had to set the correct region in my AWS config. As we are using aws-azure-login, this setting didn't matter for my day-to-day logins:
region=eu-west-1
Looking through the implementation of the IAM feature helped me look in the right place.
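For completeness, a sketch of setting that region on the profile (assuming the CLI profile dbt resolves is the risk one referenced by iam_profile; this writes the region= line above into ~/.aws/config):

aws configure set region eu-west-1 --profile risk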
I wanted to use the gcloud CLI to create a SQL instance that is accessible on the default network. So I tried this:
gcloud beta sql instances create instance1 \
--network projects/peak-freedom-xxxxx/global/networks/default
And I get the error
ERROR: (gcloud.beta.sql.instances.create) [INTERNAL_ERROR] Failed to create subnetwork.
Please create Service Networking connection with service 'servicenetworking.googleapis.com'
from consumer project '56xxxxxxxxx' network 'default' again.
When you go to the console to create it, you can check "Private IP", and there's an "Allocate and connect" button. So I'm guessing that's what I need to do, but I can't figure out how to do that with the gcloud CLI.
Can anyone help?
EDIT 1:
I've tried setting the --network to https://www.googleapis.com/compute/alpha/projects/testing-project-xxx/global/networks/default
Which resulted in
ERROR: (gcloud.beta.sql.instances.create) [INTERNAL_ERROR] Failed to create subnetwork. Set Service Networking service account as servicenetworking.serviceAgent role on consumer project
Then I tried recreating a completely new project and enabling the Service Networking API like so:
gcloud --project testing-project-xxx \
services enable \
servicenetworking.googleapis.com
And then creating the DB resulted in the same error. So I tried to manually add the servicenetworking.serviceAgent role and ran:
gcloud projects add-iam-policy-binding testing-project-xxx \
--member=serviceAccount:service-PROJECTNUMBER@service-networking.iam.gserviceaccount.com \
--role=roles/servicenetworking.serviceAgent
This succeeded with
Updated IAM policy for project [testing-project-xxx].
bindings:
- members:
- user:email@gmail.com
role: roles/owner
- members:
- serviceAccount:service-PROJECTNUMBER@service-networking.iam.gserviceaccount.com
role: roles/servicenetworking.serviceAgent
etag: XxXxXX37XX0=
version: 1
But creating the DB failed with the same error. For reference, this is the command line I'm using to create the DB:
gcloud --project testing-project-xxx \
beta sql instances create instanceName \
--network=https://www.googleapis.com/compute/alpha/projects/testing-project-xxx/global/networks/default \
--database-version POSTGRES_11 \
--zone europe-north1-a \
--tier db-g1-small
The network name of the form "projects/peak-freedom-xxxxx/global/networks/default" is for creating SQL instances under a shared VPC network. If you want to create an instance in a normal VPC network, you should use:
gcloud --project=[PROJECT_ID] beta sql instances create [INSTANCE_ID] \
    --network=[VPC_NETWORK_NAME] \
    --no-assign-ip
where [VPC_NETWORK_NAME] is of the form https://www.googleapis.com/compute/alpha/projects/[PROJECT_ID]/global/networks/[VPC_NETWORK_NAME]
For more information, check here.
Note: you need to configure private services access for this, and it's a one-time action only. Follow the instructions here to do so.
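For reference, a sketch of that one-time private services access setup with gcloud (the reserved range name google-managed-services-default is an arbitrary example; substitute your own project ID):

# reserve an IP range for Google-managed services in the VPC
gcloud compute addresses create google-managed-services-default \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=default \
    --project=[PROJECT_ID]

# peer the VPC with the Service Networking service using that range
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=default \
    --project=[PROJECT_ID]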
I created an application on an existing OpenShift project by pulling a Docker image from a remote repo.
The pod is created but fails with status "CrashLoopBackOff".
Investigating the reason using
oc logs <pod id> -p
shows a list of unsuccessful "chown: changing ownership of '...': Operation not permitted" errors.
I found this is due to the unpredictable user ID running the container.
According to
https://docs.openshift.com/container-platform/3.6/admin_guide/manage_scc.html and various posts here and there,
it seems the solution is to relax the security policy:
oc login -u system:admin https://<remote openshift endpoint>
oadm policy add-scc-to-group anyuid system:authenticated
Whether this is the solution, I don't know, because I cannot get past the first problem:
oc login -u system:admin
asks for a login/password and then prints an error:
error: username system:admin is invalid for basic auth
I guess a certificate, a token, or something secure is needed, but I cannot understand how to generate it from OpenShift, or
whether there is a key pair to generate locally (and of which kind) and how to bind the key to the user. Furthermore, checking in the web console,
I cannot see that kind of user (system:admin).
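From what I've read so far, system:admin seems to authenticate with a client certificate contained in the admin kubeconfig on the master node rather than with a password, so something like this (the path is the OpenShift 3.x default, which is an assumption on my part):

# on the master node, where the admin client certificate lives
export KUBECONFIG=/etc/origin/master/admin.kubeconfig
oc whoami   # should print system:admin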
Am I missing something?
Thanks a lot,
Lorenzo
I've just created a new cluster using Google Container Engine running Kubernetes 1.7.5, with the new RBAC permissions enabled. I've run into a problem allocating permissions for some of my services, which led me to the following:
The docs for using container engine with RBAC state that the user must be granted the ability to create authorization roles by running the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>]
However, this fails due to a lack of permissions (which I would assume are the very same permissions we are attempting to grant by running the above command).
Error from server (Forbidden):
User "<user-name>" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope.:
"Required \"container.clusterRoleBindings.create\" permission."
(post clusterrolebindings.rbac.authorization.k8s.io)
Any help would be much appreciated as this is blocking me from creating the permissions needed by my cluster services.
Janos's answer will work for GKE clusters that have been created with a password, but I'd recommend avoiding that password wherever possible (or creating your GKE clusters without a password).
Using IAM: To create that ClusterRoleBinding, the caller must have the container.clusterRoleBindings.create permission. Only the OWNER and Kubernetes Engine Admin IAM Roles contain that permission (because it allows modification of access control on your GKE clusters).
So, to allow person@company.com to run that command, they must be granted one of those roles. E.g.:
gcloud projects add-iam-policy-binding $PROJECT \
--member=user:person@company.com \
--role=roles/container.admin
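Once the grant propagates, re-running the command from the question should succeed:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=person@company.com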
If your kubeconfig was created automatically by gcloud, then your user is not the all-powerful admin user you are trying to create a binding for.
Use gcloud container clusters describe <clustername> --zone <zone> on the cluster and look for the password field.
Thereafter execute kubectl --username=admin --password=FROMABOVE create clusterrolebinding ...
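Putting that together with the binding from the question, the full command would look something like this (FROMABOVE being the password field from the describe output):

kubectl --username=admin --password=FROMABOVE \
  create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<user-name>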