Why does Spinnaker need to list namespaces when the namespaces are already known to it?

Kubernetes namespaces are configured before Spinnaker is even deployed, so Spinnaker should be able to deploy into them in a namespace-restricted enterprise environment. But this answer says Spinnaker will not run in that setting: Spinnaker with restricted namespace access.
Why does Spinnaker require read access to namespaces when those names are already known to it?
Why does the error response contain the name of the namespace that it is trying to list?
I forked Halyard so that it uses client.pods().list() to verify the Kubernetes connection, and with that change it is able to deploy Spinnaker. Spinnaker seems to work as long as it takes namespace names from its cache; when it uses live-manifest-calls or refreshes its cache, the namespace pulldowns stop working.

You don't actually need it, just proper configuration for Halyard and Spinnaker.
See the instructions:
Configure Spinnaker to install in Kubernetes
Important: This will by default limit your Spinnaker to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the --namespaces flag.
Use the Halyard hal command-line tool to configure Halyard to install Spinnaker in your Kubernetes cluster:
hal config deploy edit \
--type distributed \
--account-name ${ACCOUNT_NAME} \
--location ${NAMESPACE}
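If the goal is a namespace-restricted setup, you can also scope the Kubernetes provider account itself to the namespaces it is allowed to touch. A minimal sketch, assuming a second account named restricted-account and namespaces team-a and team-b (all placeholders):
hal config provider kubernetes account add restricted-account \
  --context $(kubectl config current-context) \
  --namespaces team-a,team-b
With --namespaces set, the account is limited to those namespaces, which is the configuration the instructions above rely on rather than cluster-wide namespace access.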

Related

Google cloud kubernetes cluster newbie question

I am a newbie to GKE. I created a GKE cluster with a very simple setup: it only has one GPU node, and everything else was left at the defaults. After the cluster came up, I was able to list the nodes and SSH into them. But I have two questions here.
I tried to install nvidia driver using the command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
It output:
kubectl apply --filename https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
daemonset.apps/nvidia-driver-installer configured
But 'nvidia-smi' cannot be found at all. Should I do something else to make it work?
On the worker node there was no .kube directory or 'config' file, so I had to copy them from the master node to make things work. The config file on the master node also updates automatically, so I have to copy it again and again. Did I miss some steps when creating the cluster, or how can I resolve this problem?
I would appreciate it if someone could shed some light on this; it has been driving me crazy after working on it for several days.
Tons of thanks.
Alex.
For the DaemonSet to work, you need a label on your worker node named cloud.google.com/gke-accelerator (see this line). The DaemonSet checks for this label on a node before scheduling any driver-installer pods there. I'm guessing the default node pool you created did not have this label. You can find more details in the GKE docs here.
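As a quick check (node names will differ per cluster), you can confirm whether your GPU node carries that label:
kubectl get nodes --show-labels | grep gke-accelerator
If nothing comes back, the label is missing and the installer DaemonSet will not schedule its pods on that node.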
Worker nodes are, by design, just that: worker nodes. They do not need privileged access to the Kubernetes API, so they don't need any kubeconfig files; the communication between worker nodes and the API is strictly controlled through the kubelet binary running on the node. Therefore, you will never find kubeconfig files on a worker node, and you should never put them there either, since if a node gets compromised, the keys in that file can be used to damage the API server. Instead, make it a habit to either use the master nodes for kubectl commands or, better yet, keep the kubeconfig on your local machine, keep it safe, and issue commands remotely to your cluster.
After all, all you need is access to an API endpoint for your Kubernetes API server, and it shouldn't matter where you access it from, as long as the endpoint is reachable. So, there is no need whatsoever to have kubeconfig on the worker nodes :)
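For example, on GKE you can fetch the cluster credentials to your own machine and drive everything remotely (cluster name, zone, and project below are placeholders):
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
kubectl get nodes
This writes an entry into your local ~/.kube/config, so nothing ever needs to be copied off the nodes.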

TLS communication between Ignite pods

I built an Ignite cluster in my Kubernetes VMs. I want to configure my Ignite pods to use TLS without certificate validation.
I created a keystore manually in each pod; these are binary files. How can I have them created as part of the chart deployment?
Should I create the files beforehand and add them to a ConfigMap, or run a shell command during the build to create them?
Can you please assist? I'm new to Kubernetes and SSL/TLS.
You need to configure your node to use the appropriate SSL/TLS settings per this guide: https://ignite.apache.org/docs/latest/security/ssl-tls
Here are the docs for using a ConfigMap to create an Ignite node configuration file: https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-configmap-for-node-configuration-file
You could set up the SSL/TLS files as ConfigMaps, or alternatively use a StatefulSet and create a PersistentVolume to hold the files: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
See the tab on https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-pod-configuration for instructions on how to mount persistent volumes for an Ignite Stateful set.
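As a sketch of the ConfigMap route (names and namespace below are placeholders): since ConfigMaps can carry binary data, you can create one straight from the keystore files and mount it into the Ignite pods, instead of creating the keystores inside each pod by hand:
kubectl create configmap ignite-keystore \
  --from-file=keystore.jks \
  --from-file=truststore.jks \
  --namespace ignite
You would then mount this ConfigMap as a volume in the StatefulSet's pod template and point the SslContextFactory keystore paths in the node configuration at the mount path.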

How to check if the CONFIG command name is changed in AWS ElastiCache (Redis)

I am trying to access AWS ElastiCache (Redis). I followed these instructions:
https://redsmin.uservoice.com/knowledgebase/articles/734646-amazon-elasticache-and-redsmin
Redis is connected now, but when I click on Configuration I get this error:
"Redsmin can't load the configuration. Check with your provider that you have access to the configuration command."
edit 1:
The Redis CONFIG command is sadly not available on AWS ElastiCache; see their documentation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RestrictedCommands.html
To deliver a managed service experience, ElastiCache restricts access to certain cache engine-specific commands that require advanced privileges. For cache clusters running Redis, the following commands are unavailable:
[...]
config
That's why the Redsmin configuration module (it's the only module impacted) cannot display your current Redis AWS ElastiCache configuration.
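You can verify this yourself from any machine that can reach the cluster (the endpoint below is a placeholder); the command is rejected on ElastiCache because CONFIG is in the restricted list, whereas on a self-managed Redis it would return the setting:
redis-cli -h <your-elasticache-endpoint> -p 6379 CONFIG GET maxmemory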

Which S3-compatible blob storage?

I want to deploy an S3-compatible blob storage in my Kubernetes cluster. I already use GlusterFS for volumes such as MongoDB's, and I tried to set up MinIO with the Helm chart https://github.com/helm/charts/tree/master/stable/minio. I just realized I can't scale MinIO up easily because of erasure coding.
So I have some questions about blob storage solutions:
Is the GlusterFS blob storage service (https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/gluster-s3-storage-template) stable and reliable?
Must I use OpenShift to deploy GlusterFS blob storage, as I have read on the web? I think not, because I can see plain Kubernetes manifests in the GlusterFS repo, like this one: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/kube-templates/gluster-s3-template.yaml.
Is it easy to use MinIO federation in Kubernetes? Is it easily scalable with a helm upgrade --set replicas=X, or do I need to upgrade the MinIO configuration manually?
As you can see, I feel lost with this S3 storage, so if you have more information or solutions, do not hesitate.
Thanks in advance!
Regarding reliability, you should read more about user experiences, for example:
An end user review of GlusterFS
Community Survey Feedback, 2019
Why OpenShift with GlusterFS:
For standalone Red Hat Gluster Storage, there is no component installation required to use it with OpenShift Container Platform. OpenShift Container Platform comes with a built-in GlusterFS volume driver, allowing it to make use of existing volumes on existing clusters, but Red Hat Gluster Storage is a commercial storage software product based on Gluster.
How to deploy it in AWS
For MinIO, please follow the official docs:
ConfigMap allows injecting containers with configuration data even while a Helm release is deployed.
To update your MinIO server configuration while it is deployed in a release, you need to:
Check all the configurable values in the MinIO chart using helm inspect values stable/minio.
Override the minio_server_config settings in a YAML-formatted file, and then pass that file with helm upgrade -f config.yaml stable/minio (see the sketch after this list).
Restart the MinIO server(s) for the changes to take effect.
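A minimal sketch of that flow, assuming the chart exposes mode and replicas values and a release named my-minio (check the helm inspect values output for the exact keys in your chart version):
helm inspect values stable/minio > minio-defaults.yaml
cat > config.yaml <<EOF
mode: distributed
replicas: 8
EOF
helm upgrade my-minio -f config.yaml stable/minio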
I didn't try it, but as per the documentation:
For federation, you can see additional environment variables in the values.yaml.
In addition, you should run MinIO in federated mode; see the Federation Quickstart Guide.
Here you can find the differences between Google and Amazon S3 storage,
or Cloud Storage interoperability from the gcloud perspective.
Hope this helps.

Spinnaker AWS provider not allowing cluster creation

I deployed Spinnaker in AWS to run a test in the same account, but I am unable to configure server groups. If I click Create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/
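For example, with an S3 backend you would point Halyard at a bucket and redeploy (bucket name and region below are placeholders):
hal config storage s3 edit \
  --bucket my-spinnaker-bucket \
  --region us-east-1
hal config storage edit --type s3
hal deploy apply
Once the storage backend is healthy, pipeline and server-group state can be persisted, and the create task should progress instead of sitting in the queue.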