GKE clusterrolebinding for cluster-admin fails with permission error

I've just created a new cluster using Google Container Engine running Kubernetes 1.7.5, with the new RBAC permissions enabled. I've run into a problem allocating permissions for some of my services, which led me to the following:
The docs for using container engine with RBAC state that the user must be granted the ability to create authorization roles by running the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>]
However, this fails due to lack of permissions (which I would assume are the very same permissions we are attempting to grant by running the above command).
Error from server (Forbidden):
User "<user-name>" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope.:
"Required \"container.clusterRoleBindings.create\" permission."
(post clusterrolebindings.rbac.authorization.k8s.io)
Any help would be much appreciated as this is blocking me from creating the permissions needed by my cluster services.

Janos's answer will work for GKE clusters that have been created with a password, but I'd recommend avoiding using that password wherever possible (or creating your GKE clusters without a password).
Using IAM: To create that ClusterRoleBinding, the caller must have the container.clusterRoleBindings.create permission. Only the OWNER and Kubernetes Engine Admin IAM Roles contain that permission (because it allows modification of access control on your GKE clusters).
So, to allow person@company.com to run that command, they must be granted one of those roles. E.g.:
gcloud projects add-iam-policy-binding $PROJECT \
--member=user:person@company.com \
--role=roles/container.admin
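Once that role is granted, a minimal sketch of retrying the binding (cluster name, zone, and user are placeholders; get-credentials refreshes the kubeconfig entry so the new IAM permission takes effect):
# Refresh cluster credentials so kubectl uses the newly granted IAM permission
gcloud container clusters get-credentials <clustername> --zone <zone>
# Retry creating the cluster-admin binding for the Google account
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=person@company.com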

If your kubeconfig was created automatically by gcloud, then your user is not the all-powerful admin user that you are trying to create a binding for.
Use gcloud container clusters describe <clustername> --zone <zone> on the cluster and look for the password field.
Thereafter execute kubectl --username=admin --password=FROMABOVE create clusterrolebinding ...
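For example, a minimal sketch combining the two steps (this assumes legacy basic auth is enabled on the cluster; --format is a standard gcloud flag, but treat the masterAuth.password field path as an assumption):
# Pull the admin password out of the cluster description
PASSWORD=$(gcloud container clusters describe <clustername> --zone <zone> \
  --format='value(masterAuth.password)')
# Create the binding as the built-in admin user
kubectl --username=admin --password=$PASSWORD \
  create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=<user-name>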

Related

How to create a Service Networking connection from a GCP project to the "default" network

I wanted to use the gcloud CLI to create an SQL instance that is accessible on the default network. So I tried this:
gcloud beta sql instances create instance1 \
--network projects/peak-freedom-xxxxx/global/networks/default
And I get the error
ERROR: (gcloud.beta.sql.instances.create) [INTERNAL_ERROR] Failed to create subnetwork.
Please create Service Networking connection with service 'servicenetworking.googleapis.com'
from consumer project '56xxxxxxxxx' network 'default' again.
When you go to the console to create it and check Private IP, there's an "Allocate and connect" button. So I'm guessing that's what I need to do, but I can't figure out how to do that with the gcloud CLI.
Can anyone help?
EDIT 1:
I've tried setting the --network to https://www.googleapis.com/compute/alpha/projects/testing-project-xxx/global/networks/default
Which resulted in
ERROR: (gcloud.beta.sql.instances.create) [INTERNAL_ERROR] Failed to
create subnetwork. Set Service Networking service account as
servicenetworking.serviceAgent role on consumer project
Then I tried recreating a completely new project and enabling the Service Networking API like so:
gcloud --project testing-project-xxx \
services enable \
servicenetworking.googleapis.com
And then creating the DB resulted in the same error. So I tried to manually add the servicenetworking.serviceAgent role and ran:
gcloud projects add-iam-policy-binding testing-project-xxx \
--member=serviceAccount:service-PROJECTNUMBER@service-networking.iam.gserviceaccount.com \
--role=roles/servicenetworking.serviceAgent
This succeeded with
Updated IAM policy for project [testing-project-xxx].
bindings:
- members:
- user:email@gmail.com
role: roles/owner
- members:
- serviceAccount:service-PROJECTNUMBER@service-networking.iam.gserviceaccount.com
role: roles/servicenetworking.serviceAgent
etag: XxXxXX37XX0=
version: 1
But creating the DB failed with the same error. For reference, this is the command line I'm using to create the DB:
gcloud --project testing-project-xxx \
beta sql instances create instanceName \
--network=https://www.googleapis.com/compute/alpha/projects/testing-project-xxx/global/networks/default \
--database-version POSTGRES_11 \
--zone europe-north1-a \
--tier db-g1-small
The network name of the form "projects/peak-freedom-xxxxx/global/networks/default" is for creating SQL instances under a shared VPC network. If you want to create an instance in a normal VPC network, you should use:
gcloud --project=[PROJECT_ID] beta sql instances create [INSTANCE_ID] \
--network=[VPC_NETWORK_NAME] \
--no-assign-ip
where [VPC_NETWORK_NAME] is of the form https://www.googleapis.com/compute/alpha/projects/[PROJECT_ID]/global/networks/[VPC_NETWORK_NAME].
For more information, check here.
Note: you need to configure private services access for this, and it's a one-time action. Follow the instructions here to do so.
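For reference, a minimal sketch of that one-time private services access setup with the gcloud CLI (the range name google-managed-services-default is an assumption, any unused name works; depending on your gcloud version the vpc-peerings command may live under beta):
# Reserve an IP range in the default network for Google-managed services
gcloud compute addresses create google-managed-services-default \
  --global --purpose=VPC_PEERING --prefix-length=16 --network=default
# Peer the network with the Service Networking service
# (this is the "Allocate and connect" step from the console)
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-default \
  --network=default --project=[PROJECT_ID]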

Prevent a user from deleting BigQuery tables

We're trying to create a very basic role that allows users to query BigQuery tables, but not delete them. The custom role we're experimenting with now has the following permissions:
- bigquery.jobs.create
- bigquery.jobs.get
- bigquery.jobs.list
- bigquery.jobs.listAll
- bigquery.readsessions.create
- bigquery.routines.get
- bigquery.routines.list
- bigquery.savedqueries.get
- bigquery.savedqueries.list
- bigquery.tables.export
- bigquery.tables.getData
- bigquery.tables.list
- bigquery.transfers.get
- resourcemanager.projects.get
We're only focusing on delete at this time, so the permissions list is a work in progress. There is only one custom role assigned to our test user with the above permissions. However, the user can delete tables from our BigQuery dataset. Any ideas on the correct combination of permissions to achieve our objective?
Thanks in advance!
You have listed 14 permissions and seem to be making an assumption these permissions allow BQ table deletion.
This assumption looks odd (because clearly the permission bigquery.tables.delete is not on the list) and in fact is incorrect. This means the GCP IAM identity (a user or a service account) assigned the role comprised of these 14 permissions will be unable to delete BQ tables, which in turn means the identity you are testing with is assigned additional role(s) and/or permission(s) that are not accounted for.
To prove the assumption is incorrect, open the BQ Console as a project administrator and click the Cloud Shell icon to start a Cloud Shell VM. Then execute the following commands at the command prompt, replacing <project-name>:
# Prove the current user is BQ admin by creating 'ds_test1' dataset,
# 'tbl_test1' table, then deleting and recreating the table
bq mk ds_test1
bq mk -t ds_test1.tbl_test1
bq rm -f -t ds_test1.tbl_test1
bq mk -t ds_test1.tbl_test1
# Create role `role_test1`
gcloud iam roles create role_test1 --project <project-name> --title "Role role_test1" --description "My custom role role_test1" --permissions bigquery.jobs.create,bigquery.jobs.get,bigquery.jobs.list,bigquery.jobs.listAll,bigquery.readsessions.create,bigquery.routines.get,bigquery.routines.list,bigquery.savedqueries.get,bigquery.savedqueries.list,bigquery.tables.export,bigquery.tables.getData,bigquery.tables.list,bigquery.transfers.get,resourcemanager.projects.get --stage GA
# Create service account 'sa-test1'
# It is a good security practice to dispose of it when testing is finished
gcloud iam service-accounts create sa-test1 --display-name "sa-test1" --description "Test SA sa-test1, delete it when not needed anymore" --project <project-name>
# Grant the role (and its permissions) to the service account
gcloud projects add-iam-policy-binding <project-name> --member=serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com --role projects/<project-name>/roles/role_test1
# Save the credential of the service account (including the security sensitive
# private key) to a disk file
gcloud iam service-accounts keys create ~/key-sa-test1.json --iam-account sa-test1@<project-name>.iam.gserviceaccount.com
# Impersonate the service account. This replaces the current permissions with
# that of the service account
gcloud auth activate-service-account sa-test1@<project-name>.iam.gserviceaccount.com --key-file=./key-sa-test1.json
# Confirm the ability to list tables
bq ls ds_test1
# Confirm inability to delete tables
# The command fails with error: BigQuery error in rm operation: Access Denied: Table <project-name>:ds_test1.tbl_test1: User does not have bigquery.tables.delete permission for table <project-name>:ds_test1.tbl_test1.
bq rm -f -t ds_test1.tbl_test1
# Close SSH connection to the VM and logoff
exit
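Note that after the test, gcloud is still impersonating the service account; a quick sketch of switching back (assuming your own user was authenticated earlier in this Cloud Shell):
# Reactivate your own account as the active gcloud identity
gcloud config set account <your-admin-email>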
To see the roles granted to the service account 'sa-test1' created above open Cloud Shell and execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com"
It should list our role projects/<project-name>/roles/role_test1.
To see the roles granted to the user who can delete tables execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:user:<email-of-the-user>"
I did some tests on my end.
When a user has the 14 listed permissions, they are not even able to see the BigQuery datasets in the UI. To do so, the bigquery.datasets.get permission must be added to the custom role.
Even with the 15 permissions, they are unable to delete BigQuery tables, so you are on the right path.
Being able to delete tables indicates that the user does not have the created Custom role assigned or has more permissions from additional roles. Please:
Check that the Roles have been set correctly (my scenario with the 15 permissions). Be sure to save changes when assigning permissions to your Custom roles.
In your IAM Dashboard please double check that the user has this role linked to their account.
Also check that the user does not have additional roles like Owner, Editor, BigQuery Admin, or BigQuery Data Editor; any of those extra roles would make them able to delete BigQuery tables.
Finally, double-check who is logged into the UI by clicking the photo at the top right corner of your GCP UI; it should be the test user, not a different account such as myUser@emaildomain.com.
Hope this is helpful!

Can't create bucket without authentication

We updated our Couchbase from 4.6 Community edition to 5.0.0-2873 Enterprise Edition for testing purposes and our software using the java-client started throwing InvalidPasswordException when trying to open a bucket.
As I've found, every newly created bucket has authType='sasl' and a randomly generated saslPassword.
I've tried creating a bucket using the CLI instead of the GUI:
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket=general --bucket-ramsize=1300 --bucket-type=couchbase --bucket-password=
I got the following error:
ERROR: unrecognized arguments: --bucket-password=password
I also tried the bucket-edit function with the same result.
According to the documentation the argument should be valid.
I also tried using the REST API to change the bucket authentication (and similarly the password), but even though this didn't throw any errors, the authType and the password remained the same.
curl -X POST -u Administrator:password -d 'authType=none' http://<host>:8091/pools/default/buckets/general
Again, according to the documentation this should work.
If I query the bucket information for the sasl password and provide that for the openBucket function then the connection works, however we really don't want to use this feature in our system.
So, any other ideas how it would be possible to remove the bucket authentication in our 5.0EE Couchbase setup?
In Couchbase 5.0 we no longer support bucket passwords and have moved to using role-based access control when connecting to buckets. This means that in 5.0 the standard (pre-production) way to connect to a bucket is by using the Administrator user and password that you created when setting up the cluster. In case you're unsure what the Administrator user is, it is the user you create when you first go through the Couchbase setup wizard, or it is the username and password you specify on the command line when running the couchbase-cli cluster-init command.
One thing to note is that using the Administrator user/password is the standard pre-production workflow. I would recommend that when you go into production you create separate users for your application which only have access to the cluster resources they need. You can do this by going to the Users tab in the Administration Console, creating a new user, and giving them the Full Bucket Access role, which is the standard role that applications should have.
You might now be saying to yourself that this all sounds great, but when I use the Administrator user/password I am still having issues. If this is the case, the reason is that you have Couchbase 5.0, but your SDK is not new enough to handle the new RBAC authentication mechanism in 5.0. The workaround for this is to create a user in the Users tab with the same name as the bucket and give that user the Full Bucket Access role. You can then use this user to authenticate.
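For completeness, a minimal sketch of that workaround done through the REST API instead of the UI (host, credentials, and the bucket name general are placeholders; /settings/rbac/users/local/ is the Couchbase 5.0 user-management endpoint):
# Create a local user named after the bucket with full access to it
curl -X PUT -u Administrator:password \
  http://<host>:8091/settings/rbac/users/local/general \
  -d password=<app-password> \
  -d 'roles=bucket_full_access[general]'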
One last thing to mention is that during an upgrade from a pre-5.0 cluster to a 5.0 cluster, Couchbase will automatically create a user for each bucket. Each user will have the same name as one of the buckets, and the password for that user will correspond to the bucket password. This is done mainly to ensure that there is no application downtime during an upgrade. After upgrading the cluster, the next step should ideally be to upgrade the Couchbase client library to have it start using RBAC authentication.
If you need to stay with the old approach and no password, you can use couchbase-cli with --rbac-username and --rbac-password "", but you need to specify the password as "", e.g.:
./couchbase-cli user-manage -c localhost:8091 -u Admin -p password --set --rbac-username <UserForBucket> --roles bucket_full_access[<BucketName>] --rbac-password "" --auth-domain local

Google cloud dataproc failing to create new cluster with initialization scripts

I am using the below command to create data proc cluster:
gcloud dataproc clusters create informetis-dev \
--initialization-actions "gs://dataproc-initialization-actions/jupyter/jupyter.sh,gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/hue/hue.sh,gs://dataproc-initialization-actions/ipython-notebook/ipython.sh,gs://dataproc-initialization-actions/tez/tez.sh,gs://dataproc-initialization-actions/oozie/oozie.sh,gs://dataproc-initialization-actions/zeppelin/zeppelin.sh,gs://dataproc-initialization-actions/user-environment/user-environment.sh,gs://dataproc-initialization-actions/list-consistency-cache/shared-list-consistency-cache.sh,gs://dataproc-initialization-actions/kafka/kafka.sh,gs://dataproc-initialization-actions/ganglia/ganglia.sh,gs://dataproc-initialization-actions/flink/flink.sh" \
--image-version 1.1 --master-boot-disk-size 100GB --master-machine-type n1-standard-1 --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
--num-preemptible-workers 2 --num-workers 2 --preemptible-worker-boot-disk-size 1TB --properties hive:hive.metastore.warehouse.dir=gs://informetis-dev/hive-warehouse \
--worker-machine-type n1-standard-2 --zone asia-east1-b --bucket info-dev
But Dataproc failed to create cluster with following errors in failure file:
+ mysql -u hive -phive-password -e ''
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
+ mysql -e 'CREATE USER '\''hive'\'' IDENTIFIED BY '\''hive-password'\'';'
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
Does anyone have any idea behind this failure ?
It looks like you're missing the --scopes sql-admin flag as described in the initialization action's documentation, which will prevent the CloudSQL proxy from being able to authorize its tunnel into your CloudSQL instance.
Additionally, aside from just the scopes, you need to make sure the default Compute Engine service account has the right project-level permissions in whichever project holds your CloudSQL instance. Normally the default service account is a project editor in the GCE project, so that should be sufficient when combined with the sql-admin scopes to access a CloudSQL instance in the same project, but if you're accessing a CloudSQL instance in a separate project, you'll also have to add that service account as a project editor in the project which owns the CloudSQL instance.
You can find the email address of your default compute service account under the IAM page for your project deploying Dataproc clusters, with the name "Compute Engine default service account"; it should look something like <number>@project.gserviceaccount.com.
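Putting those pieces together, a minimal sketch of the create command with the missing scope added (only the Cloud SQL proxy action is shown; your other initialization actions and flags would be appended unchanged):
gcloud dataproc clusters create informetis-dev \
  --scopes sql-admin \
  --initialization-actions gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh \
  --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
  --image-version 1.1 --zone asia-east1-b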
I am assuming that you already created the Cloud SQL instance with something like this, correct?
gcloud sql instances create g-test-1022 \
--tier db-n1-standard-1 \
--activation-policy=ALWAYS
If so, then it looks like the error is in how the argument for the metadata is formatted. You have this:
--metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance”
Unfortunately, the zone looks to be incomplete (asia-east1 instead of asia-east1-b).
Additionally, with that many initialization actions running, you'll want to provide a pretty generous initialization action timeout so the cluster does not assume something has failed while your actions take a while to install. You can do that by specifying:
--initialization-action-timeout 30m
That will allow the cluster to give the initialization actions 30 minutes to bootstrap.
Around the time you reported this, an issue was detected with the Cloud SQL proxy initialization action. It is most probable that this issue affected you.
Nowadays, it shouldn't be an issue.

Openshift Origin Latest Project creation issue

I am unable to create a project in OpenShift. I created a project previously and deleted it. It looks like the project still exists, but I am unable to access or delete it, so I am stuck. Also, logging into the console https://console.preview.openshift.com/console/ doesn't show any existing projects.
I ran the following oc commands from the terminal.
Any suggestions on how to resolve this issue?
Thanks
XX:~ XX$ oc new-project test
Error from server: projectrequests "test" is forbidden: user XX cannot create more than 1 project(s).
XX:~ XX$ oc delete project test
Error from server: User "XX" cannot delete projects in project "test"
XX:~ XX$ oc status
Error from server: User "XX" cannot get projects in project "default"
XX:~ XX$ oc get projects
You need to give privileges/policies to your user which will allow the actions you want to perform.
If you are just in a proof-of-concept environment, I would recommend making your user cluster-admin in the whole cluster. This will give all the possible privileges to your user. Of course this isn't recommended for every user in a 'real' environment.
First you need to authenticate as the 'default admin', which is created after the installation. This default admin user doesn't work with the normal user/password authentication; it uses a client certificate.
oc login -u system:admin --config=/etc/origin/master/admin.kubeconfig
Now you will see a list of the available projects (default, openshift management, etc.), and you're able to give cluster roles to other users.
Make your user cluster-admin over the whole cluster
oadm policy add-cluster-role-to-user cluster-admin (youruser)
Now you have the cluster-admin privileges inside the whole cluster. You are also able to give privileges to a user in a specific project rather than in the whole cluster. Then you have to use:
oadm policy add-role-to-user <role> <username> (in the current project)
This will give the role to a user, but only inside the project from where you've performed this command.
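For example, a minimal sketch granting the built-in edit role to a hypothetical user developer1 in the current project:
# Allow developer1 to modify resources in the current project only
oadm policy add-role-to-user edit developer1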
For more information about the available cluster roles and policies I will point to the official documentation.
I raised a defect with the OpenShift team, as pointed out in the support link.
https://docs.openshift.com/online/getting_started/devpreview_faq.html#devpreview-faq-support
Here is the response i received from Support Team.
It seems that you have issued a bug and followed up for this already:
https://bugzilla.redhat.com/show_bug.cgi?id=1368862
After the cause is investigated, our operations team will be sure to clean up the project manually for you, to allow you to continue working with the developer preview.
Latest update:
The project has now been cleaned up and you should be able to create a new project.
I am able to create Project in Openshift now.