I have a question about giving access to a k8s cluster. For example, a new member joined our team. He created a CertificateSigningRequest and I approved it, then I created a kubeconfig and gave it to him to access our cluster. If he leaves our team one day, how can I remove his access? I want to make sure he can no longer access our cluster with that kubeconfig.
IMHO you should use an external authentication provider. You can take a look at https://dexidp.io/docs/kubernetes/, which is an abstraction layer over other IDaaS providers such as Azure, Google, GitHub and many more. For example, if your company uses Active Directory, you can control access to the cluster through group memberships, so that withdrawing access becomes part of the company-wide leaver process.
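Once the API server trusts the OIDC tokens issued via Dex, authorization is still ordinary Kubernetes RBAC. A minimal sketch of what that could look like (the group name and role below are assumptions, not taken from your setup): you bind a directory group to a role, and removing someone from the group revokes their cluster access with no kubeconfig or certificate to chase down.

```yaml
# Hypothetical binding: everyone in the IdP group "k8s-developers" gets the
# built-in "edit" ClusterRole. Removing a person from that group in AD (or
# whatever provider Dex fronts) removes their cluster access.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s-developers
```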
I am new to BigQuery and I'm trying to understand how VPC access works for BigQuery projects. I have a BigQuery project that imports data from several other BigQuery projects (no VPC, but same organisation). I also need to connect to a project that is in a VPC network (still the same organisation).
The only way that I can read this VPC project is to:
Be a G Suite member
Connect to the organisation VPN
Open the Cloud Console through a proxy
I can only read the project and write queries if I'm in the VPC project itself.
I want to be able to read and write queries for the VPC project in my own project
I want to be able to schedule data imports on daily aggregated data from the VPC project into my project.
Will this be possible if I add my project to a service perimeter and get access through a perimeter bridge? What sort of access do I need to set up in order to read and import VPC project data directly in my project?
On this page you can find the BigQuery limitations when using VPC Service Controls. Basically, if you want to use a service account to access a BigQuery instance protected by a service perimeter, you must access it from within the perimeter.
VPC Service Controls does not support copying BigQuery resources protected by a service perimeter to another organization. Access levels do not enable you to copy across organizations.
To copy protected BigQuery resources to another organization, download the dataset (for example, as a CSV file), and then upload that file to the other organization.
The BigQuery Data Transfer Service is supported only for the following services:
Campaign Manager
Google Ad Manager
Google Ads
Google Cloud Storage
Google Merchant Center
Google Play
YouTube
The BigQuery Classic Web UI is not supported. A BigQuery instance protected by a service perimeter cannot be accessed with the BigQuery Classic Web UI.
The third-party ODBC driver for BigQuery cannot currently be used with the restricted VIP.
BigQuery audit log records do not always include all resources that were used when a request is made, due to the service internally processing access to multiple resources.
When using a service account to access a BigQuery instance protected by a service perimeter, the BigQuery job must be run within a project inside the perimeter. By default, the BigQuery client libraries will run jobs within the service account's or user's project, causing the query to be rejected by VPC Service Controls.
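To make that last limitation concrete, here is a minimal, hedged sketch of how you could pin the job to an in-perimeter project with the google-cloud-bigquery Python client (all project, dataset and table names below are placeholders):

```python
from google.cloud import bigquery

# Run the query job in a project that sits inside the service perimeter,
# instead of letting the client default to the caller's own project
# (which VPC Service Controls would reject).
client = bigquery.Client(project="project-inside-perimeter")

query = """
    SELECT COUNT(*) AS row_count
    FROM `protected-project.protected_dataset.some_table`
"""
for row in client.query(query).result():
    print(row.row_count)
```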
I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and then delete the old node pools. The cluster's scopes will change to reflect the new node pool.
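A rough sketch of that workaround with gcloud (cluster name, zone and pool names are placeholders; the cloud-platform scope corresponds to "full access to all Cloud APIs"):

```sh
# 1. Add a node pool that has the scopes you actually need.
gcloud container node-pools create new-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --scopes https://www.googleapis.com/auth/cloud-platform

# 2. Once workloads have moved onto the new pool, delete the old one.
gcloud container node-pools delete default-pool \
    --cluster my-cluster \
    --zone us-central1-a
```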
More details can be found in this post.
To resolve a few issues we are running into with Docker and running multiple instances of some services, we need to be able to share values between running instances of the same Docker image. The original solution I found was to create a storage account in Azure (where we are running our Kubernetes instance that houses the containers) and a Key Vault in Azure, accessing both via the well-defined APIs that Microsoft has provided for Data Protection (detailed here).
Our architect instead wants to use Kubernetes Persistent Volumes, but he has not provided information on how to accomplish this (he just wants to save money on the Azure subscription by not having an additional storage account or key storage). I'm very new to Kubernetes and have no real idea how to accomplish this, and my searches so far have not turned up much that is useful.
Is there an extension method that should be used for Persistent Volumes? Would this just act like a shared file location and be accessible with the PersistKeysToFileSystem API for Data Protection? Any resources that you could point me to would be greatly appreciated.
A PersistentVolume with Kubernetes in Azure will not give you the same exact functionality as Key Vault in Azure.
PersistentVolume:
Stores data locally on a volume mounted on a server
The volume can be encrypted
The volume moves with the pod.
If the pod starts on a different server, the volume moves with it.
Accessing the volume from other pods is not that easy.
You can control performance by assigning guaranteed IOPS to the volume (from the cloud provider)
Key Vault:
Stores keys in a centralized location managed by Azure
Data is encrypted at rest and in transit.
You rely on a remote API rather than a local file system.
There might be a performance hit from going to an external service, but I assume this is not a major problem within Azure.
Kubernetes pods can access the service from anywhere as long as they have network connectivity to the service.
Less maintenance time, since it's already maintained by Azure.
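If the PersistentVolume route is chosen anyway, the key directory has to be readable and writable by every replica, which means a ReadWriteMany-capable volume (on AKS that typically means Azure Files rather than a managed disk). A minimal, hypothetical sketch of the claim (names and storage class are assumptions); each pod would mount it and point the Data Protection API at the mount path via PersistKeysToFileSystem:

```yaml
# Hypothetical shared claim for Data Protection key material.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataprotection-keys
spec:
  accessModes:
    - ReadWriteMany            # several replicas must share the same keys
  storageClassName: azurefile  # Azure Files supports ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Note that an azurefile-backed volume is still provisioned from an Azure storage account under the hood, so it may not avoid the storage account the architect wants to drop.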
I'd like to limit the way Redis is used by the applications that access it, as well as by users, so that not every application/user has access to all data.
Is there a way of doing access control on Redis so that applications are only allowed to access certain keys and not others?
Not at the moment. Currently Redis provides a single authorization password per Redis server, which grants full admin and data access to all (shared/numbered) databases in it.
The simple answer is no; this should be handled on the application end.
While Redis does not try to implement Access Control, it provides a tiny layer of authentication that is optionally turned on editing the redis.conf file.
(redis.io)
As mentioned above, Redis provides only a tiny layer of authentication; everything else should be handled by the application.
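That tiny layer is just a single shared password set in redis.conf, which clients then present with the AUTH command; it only separates "can connect" from "cannot connect" and gives no per-key or per-user control (the password value below is a placeholder):

```
# redis.conf – the only built-in access control available here
requirepass a-long-random-secret

# each client must authenticate before issuing other commands:
#   AUTH a-long-random-secret
```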
I have a web farm in Amazon and one of my sites needs some caching.
I am considering using ElastiCache Redis.
Can anyone shed some light on how I would connect to and interact with this cache?
I have read about several client SDKs like StackExchange.Redis, ServiceStack, etc.
.NET is my preferred platform.
Can these client SDKs be used to interact with Redis on ElastiCache?
Does anyone know of documentation and/or code examples for using ElastiCache Redis (with the StackExchange.Redis SDK)?
I'm guessing I will have to authenticate using a key/secret pair; is this supported in any of these client SDKs?
Thanks in advance!
Lars
You connect to ElastiCache the same way you connect to any other Redis instance. Once you create a new ElastiCache instance, you'll be given the hostname to connect to. There is no need for a secret/key pair: all access to the Redis instance is configured through security groups, just like with other AWS instances in EC2, RDS, etc.
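A minimal sketch of that (the endpoint below is a made-up placeholder, and redis-py is used only for illustration; StackExchange.Redis or any other client connects to the same host and port in the same way):

```python
import redis

# ElastiCache exposes a plain Redis endpoint; there is no AWS key/secret
# involved, and reachability is controlled by security groups instead.
r = redis.StrictRedis(
    host="my-cache.abc123.0001.use1.cache.amazonaws.com",  # placeholder
    port=6379,
)
r.set("greeting", "hello from ElastiCache")
print(r.get("greeting"))
```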
With that said, there are two important caveats:
You will only be able to connect to ElastiCache from within the region and/or VPC in which it's launched, even if you open up the security group to outside IPs (for me, this is one of the biggest reasons not to use ElastiCache).
You cannot set a password on your Redis instance. Anyone on a box that is given access to the instance in the security groups (keeping in mind the limitation from caveat 1) will be able to access your Redis instance with full rights to add/delete/modify whatever keys they like. This is the other big reason not to use ElastiCache, though it certainly still has use cases where these drawbacks are less important.