Istio on Azure Container Service (AKS) - azure-container-service

I was planning on installing Istio on my new AKS cluster. However, the Istio prerequisites state that the Kubernetes cluster should have RBAC enabled, and I read that AKS (preview) doesn't have it enabled. Is this true? Is there an option for me to try Istio on AKS?

AKS is now GA, and it looks like RBAC is available:
https://azure.microsoft.com/en-us/blog/azure-kubernetes-service-aks-ga-new-regions-new-features-new-productivity/

In fact, RBAC is not currently available in Azure AKS. According to this GitHub issue it is on the roadmap for Q1 2018.
In Azure you can use ACS, the older predecessor of AKS, which gives you control over the Kubernetes master, or acs-engine, where you have full control over the Kubernetes cluster.

AKS now enables RBAC by default.
There are also docs on how to install Istio:
https://learn.microsoft.com/en-us/azure/aks/istio-install
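For reference, here is a rough sketch of enabling the Istio-based service mesh add-on with the Azure CLI, per the linked docs; the resource group and cluster names are placeholders, so check the docs for the current command set:

# Enable the managed Istio add-on on an existing AKS cluster (names are placeholders)
az aks mesh enable --resource-group myResourceGroup --name myAKSCluster
# Verify the add-on's control plane pods (namespace used by the AKS add-on)
kubectl get pods -n aks-istio-system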

We are doing a POC for a service mesh on our AKS cluster using Istio. I found a very good guide for installing Istio with all of its components on an AKS cluster, and it does not require RBAC on AKS. In fact, this guide is cloud agnostic. I am not sure whether it is production ready, but it has been working like a charm so far. Just apply the first three files; the fourth one is optional. The file names might be a little confusing, but it works on AKS very well. Hope that works for you.
Istio Installation Files
kubectl apply -f 1-istio-init.yaml -f 2-istio-minikube.yaml -f 3-kiali-secret.yaml
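If the manifests apply cleanly, a quick sanity check is to confirm that the control plane and Kiali pods are running (the istio-system namespace is assumed from the guide):

# List the Istio pods and services deployed by the guide
kubectl get pods -n istio-system
kubectl get svc -n istio-system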

Related

How does Apache Ignite deploy in K8S?

On the Ignite website, I see guides for deploying Ignite on Amazon EKS, Microsoft Azure Kubernetes Service, and Google Kubernetes Engine. If I run my own Kubernetes deployment, can I still deploy Ignite? Is it the same as deploying Ignite on those three platforms?
Sure, just skip the initial EKS/Azure initialization steps since you don't need them and move directly to the K8s configuration.
Alternatively, you might try the Apache Ignite and GridGain Kubernetes operator, which simplifies the deployment.
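If it helps, here is a minimal, illustrative sketch of what a plain-Kubernetes Ignite deployment can look like; the image tag, ports, and discovery setup are assumptions, and the official Ignite Kubernetes guide adds a ConfigMap with node configuration plus RBAC for the Kubernetes IP finder:

# Headless Service used by Ignite nodes to discover each other
apiVersion: v1
kind: Service
metadata:
  name: ignite
spec:
  clusterIP: None
  selector:
    app: ignite
  ports:
    - name: discovery
      port: 47500
    - name: communication
      port: 47100
---
# The Ignite nodes themselves; scale by changing replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
        - name: ignite
          image: apacheignite/ignite:2.14.0   # tag is an assumption
          ports:
            - containerPort: 47100
            - containerPort: 47500

Apply it with kubectl apply -f ignite.yaml and check the pods with kubectl get pods -l app=ignite.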

Which s3 compatible blob storage?

I want to deploy S3-compatible blob storage in my Kubernetes cluster. I already use GlusterFS for volumes (e.g. for MongoDB), and I tried to set up MinIO with the Helm chart https://github.com/helm/charts/tree/master/stable/minio. I just realized I can't scale MinIO up easily because of erasure coding.
So I have some questions about blob storage solutions:
Is the GlusterFS blob storage service stable and reliable (https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/gluster-s3-storage-template)?
Must I use OpenShift to deploy GlusterFS blob storage, as I have read on the web? I think not, because I can see plain Kubernetes manifests in the GlusterFS repo, such as this one: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/kube-templates/gluster-s3-template.yaml.
Is MinIO federation easy to use in Kubernetes? Is it easily scalable with a "helm upgrade --set replicas=X", or do I need to update the MinIO configuration manually?
As you can see, I feel lost with this S3 storage, so if you have more information or solutions, do not hesitate.
Thanks in advance!
Regarding reliability, you should read more about user experiences, for example:
An end user review of GlusterFS
Community Survey Feedback, 2019
Why OpenShift with GlusterFS:
For standalone Red Hat Gluster Storage, there is no component installation required to use it with OpenShift Container Platform. OpenShift Container Platform comes with a built-in GlusterFS volume driver, allowing it to make use of existing volumes on existing clusters. Note, however, that Red Hat Gluster Storage is a commercial storage software product based on Gluster.
How to deploy it in AWS
For MinIO, please follow the official docs:
ConfigMap allows injecting containers with configuration data even while a Helm release is deployed.
To update your MinIO server configuration while it is deployed in a release, you need to do the following (a combined sketch follows these steps):
Check all the configurable values in the MinIO chart using helm inspect values stable/minio.
Override the minio_server_config settings in a YAML-formatted file, and then pass that file like this: helm upgrade -f config.yaml stable/minio.
Restart the MinIO server(s) for the changes to take effect.
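Put together, the flow looks roughly like this; the release name my-minio and the StatefulSet name are assumptions, and the exact values structure can differ between chart versions:

# 1. See what the chart exposes
helm inspect values stable/minio > minio-values.yaml
# 2. Put your overrides in config.yaml, then roll the release with it
helm upgrade my-minio -f config.yaml stable/minio
# 3. Restart the MinIO pods so the new configuration takes effect
kubectl rollout restart statefulset/my-minio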
I didn't try it myself, but as per the documentation:
For federation, I can see additional environment variables in values.yaml.
In addition, you should run MinIO in federated mode; see the Federation Quickstart Guide.
Here you can find the differences between Google and Amazon S3 storage,
or Cloud Storage interoperability from the Google Cloud perspective.
Hope this helps.

What is the redis URI, when redis is used in kubernetes?

Objective
I want to access the Redis database running in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI, when redis is running on a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes; this is the link to the sample in the Kubernetes docs.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for your help ... I know this is maybe too simple a question, but currently I can't see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the Redis port from the cluster to your local machine when you call kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
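For example, assuming the pod name matches the sample and redis-cli is installed on your machine:

# Forward the Redis port from the pod to localhost
kubectl port-forward redis-master 6379:6379
# In a second terminal, verify the connection; no AUTH is required by default
redis-cli -h 127.0.0.1 -p 6379 ping   # should reply PONG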
If you want to make it available directly from outside the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or simply a Service exposed through an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is just the default Service type, since a Service always gets a ClusterIP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is good official documentation about connecting applications with Services.
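As an illustration, a minimal ClusterIP Service for the Redis master could look like this; the selector labels are assumptions and must match the labels on the Redis pod from the sample you used:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP          # switch to NodePort or LoadBalancer for external access
  selector:
    app: redis             # must match the pod labels from the sample
    role: master
  ports:
    - port: 6379
      targetPort: 6379

Inside the cluster the database is then reachable at redis://redis-master:6379 (or redis://[SERVICE-IP]:6379).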

Creating a kubernetes cluster on GCP using Spinnaker

For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which is in turn hosted on GCP.
My Use case
1. Developer Checks in the code in feature branch
2. Environment is spun up on Google Cloud with Kubernetes
3. Application gets deployed on Kubernetes
4. Gets tested and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating a Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant to do the second point in your list. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) and then works with that. So instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it'd be better (for your costs as well ;)) if you separate the environments using Kubernetes namespaces.
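A minimal sketch of the namespace-per-branch idea; the namespace name and manifests path are placeholders:

# Create a throwaway namespace for the feature branch
kubectl create namespace feature-1234
# Deploy the application manifests into it
kubectl apply -f k8s/ --namespace feature-1234
# Tear everything down when testing is finished
kubectl delete namespace feature-1234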

Kubernetes master high availability or replication configuration

Hi all, we are looking for a practical and tested guide or reference for Kubernetes master high availability, or another solution for master node failover.
There are definitely folks running Kubernetes HA masters in production following the instructions for High Availability Kubernetes Clusters. As noted at the beginning of that page, it's an advanced use case and requires in-depth knowledge of how the Kubernetes master components work.
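For what it's worth, a common alternative to the manual steps on that page is kubeadm's stacked control plane; here is a rough sketch, not the procedure from the linked docs, where the load balancer endpoint is a placeholder and the join values come from the init output:

# First control-plane node; the endpoint should point at a load balancer
# in front of all control-plane nodes
kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
# Additional control-plane nodes join with the command printed by init,
# plus the --control-plane flag, e.g.:
# kubeadm join lb.example.com:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <key>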