Can Opereto be installed on any cloud-native environment?

I saw that Opereto can be installed on a single node using docker-compose. However, I would like to scale by installing Opereto on Kubernetes. Is it supported as well?
Thanks

Opereto is now released in two delivery methods: docker-compose, for a small-footprint single-node installation, and a Kubernetes cluster deployment.
https://docs.opereto.com/installation-get-started/
You can install Opereto in any environment that supports vanilla Kubernetes. There might be some differences in the deployment commands if you use the oc command (OpenShift) instead of kubectl, but it should be straightforward to work out.
Please note, however, that Opereto requires an HTTPS ingress to be configured. Ingress configuration may differ from one K8s provider to another.
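For illustration only, a generic Kubernetes Ingress with TLS looks roughly like this; the host, secret, service name, and port are placeholders, and Opereto's actual manifests may differ:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: opereto
    spec:
      tls:
        - hosts:
            - opereto.example.com
          secretName: opereto-tls        # TLS cert/key stored as a Secret
      rules:
        - host: opereto.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: opereto-server # placeholder service name
                    port:
                      number: 8080

On OpenShift you would typically express the same thing as a Route instead of an Ingress.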

Related

Traefik ingress route is not accessible

I have set up the default Traefik dashboard example, but it is not being exposed. Using kubectl port-forward works. The only thing that comes to mind is that I am using Flannel as the CNI, while in my previous k8s cluster I was using Calico. It is weird, though, as I have other services exposed from the cluster outside of Traefik that are working fine.
Not a proper answer, but it seems that the Flannel CNI was causing reachability issues. It was either a misconfiguration on my end or the tool needed some fine-tuning. I just replaced it with Calico and ingresses work fine!
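For reference, this is the kind of contrast that points at the CNI rather than at Traefik itself; the namespace, deployment name, port, and host below are placeholders from the default dashboard example:

    # Works: port-forward tunnels through the API server, bypassing pod networking
    kubectl -n kube-system port-forward deploy/traefik 9000:9000

    # Fails in this case: reaching the ingress depends on pod-to-pod reachability,
    # which the Flannel setup was not providing
    curl -H "Host: dashboard.example.com" http://<node-ip>/dashboard/

If port-forward works but the ingress does not, the ingress controller itself is usually fine and the overlay network is the next suspect.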

I'm using kuberctl (on my computer) to control minikube (on another server), but how do I start it?

The OS: Ubuntu 16.04;
On my computer: I need kuberctl to control the minikube;
On the server: minikube is installed and already running;
The network: my computer and the server are on the same network, and routing is OK.
Question: I have already downloaded a kuberctl binary tar.gz; how do I start and configure my kuberctl?
I've unpacked the kuberctl binary tar.gz, but I don't know what to do next.
In fact, no code so far.
I hope to configure kuberctl and start it; the documentation doesn't seem to help.
Please follow the instructions described in the documentation. You need to specify connection properties for your kubectl binary in a configuration file (kubeconfig). To do that, you'll need to know the Minikube configuration on your server.
Make sure that the Minikube cluster accepts connections from external addresses and that there are no firewall rules blocking the connection.
After that, you should be able to use kubectl on your local computer to control that minikube cluster.
Also, it's kubectl and not kuberctl.
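A minimal sketch of the client-side setup, assuming the server's IP is 192.168.1.10, the API server listens on minikube's default port 8443, and you have copied the CA and client certificate/key files from the server (all paths and the IP are placeholders):

    # Unpack the binary and put it on your PATH
    tar -xzf kubectl.tar.gz
    sudo install kubectl /usr/local/bin/kubectl

    # Point kubectl at the remote minikube API server
    kubectl config set-cluster minikube --server=https://192.168.1.10:8443 \
      --certificate-authority=./ca.crt
    kubectl config set-credentials minikube-admin \
      --client-certificate=./client.crt --client-key=./client.key
    kubectl config set-context minikube --cluster=minikube --user=minikube-admin
    kubectl config use-context minikube

    # Verify the connection
    kubectl get nodes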

I just want to run these 4 containers: Portainer, Odoo, Postgres, Traefik, but Traefik does not work

Portainer is an amazing tool that enables anyone to work with containers.
I can install Portainer, and with it install Odoo and Postgres, and they run fine.
The next step is to install a proxy.
Traefik would do what I need: I can redirect multiple Odoo instances to port 443.
The problem is that I installed the official Traefik image using Portainer, but it does not work.
Many people would like to install Traefik with Portainer, but this case is not documented and does not work.
Very frustrating.
I have been through this. The way I solved it was to use Nginx Proxy Manager instead of Traefik. To me, it simply makes more sense to have precise control over routes (home lab server, etc.) in an easy-to-manage, visual way.
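For what it's worth, a minimal docker-compose sketch for Nginx Proxy Manager looks roughly like this (based on the image's documented defaults; the volume paths are placeholders you would adapt):

    version: '3'
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'    # HTTP traffic to be proxied
          - '443:443'  # HTTPS traffic to be proxied
          - '81:81'    # admin web UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt

Once it is up, you log in to the UI on port 81 and define the routes to your Odoo containers there.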

Kubernetes API for provisioning pods-as-a-service?

Currently I have an app (myapp) that deploys as a Java web app running on top of a "raw" (Ubuntu) VM. In production there are essentially 5 - 10 VMs running at any given time, all load balanced behind an nginx load balancer. Each VM is managed by Chef, which injects the correct env vars and provides the app with runtime arguments that make sense for production. So again: load balancing via nginx and configuration via Chef.
I am now interested in containerizing my future workloads, and porting this app over to Docker/Kubernetes. I'm trying to see what features Kubernetes offers that could replace my app's dependency on nginx and Chef.
So my concerns:
Does Kube-Proxy (or any other Kubernetes tool) provide subdomains or otherwise load-balanced URLs that could balance traffic across any number of pod replicas? In other words, if I "push" my newly containerized app/image to the Kubernetes API, is there a way for Kubernetes to make the image available as, say, 10 pod replicas, all load balanced behind myapp.example.com? If not, what integration between Kubernetes and networking software (DNS/DHCP) is available?
Does Kubernetes (say, perhaps via etcd?) offer any sort of key-value based configuration? It would be nice to send a command to the Kubernetes API and give it labels like myapp:nonprod or myapp:prod and have Kubernetes "inject" the correct KV pairs into the running containers. For instance, perhaps in the "nonprod" environment the app connects to a MySQL database named mydb-nonprod.example.com, but in prod it connects to an RDS cluster. Or something.
Does Kubernetes offer service-registry-like features that could replace Consul/ZooKeeper?
Answers:
1) DNS subdomains in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
Additionally, each Service load balancer gets a static IP address, so you can also program other DNS names if you want to target that IP address.
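As a hedged sketch, a Service of type LoadBalancer fronting the replicas would look roughly like this (all names, labels, and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: LoadBalancer   # gets a stable IP you can point myapp.example.com at
      selector:
        app: myapp         # matches the label on each pod replica
      ports:
        - port: 80         # port exposed by the load balancer
          targetPort: 8080 # port the containers actually listen on

A ReplicationController (in v1.0 terms) keeps the 10 replicas running; the Service load balances across whichever pods match the selector.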
2) Key/Value pairs
At creation time you can inject arbitrary key/value environment variables and then use those in your scripts/config; e.g. you could connect to ${DB_HOST}.
Though for your concrete example, we suggest using Namespaces (http://kubernetes.io/v1.0/docs/admin/namespaces/README.html): you can have a "prod" namespace and a "dev" namespace, and the DNS names of services resolve within those namespaces (e.g. mysql.prod.cluster.internal and mysql.dev.cluster.internal).
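For illustration, a pod spec fragment that injects such a key/value pair; DB_HOST, the image, and the hostname are placeholder values, not anything Kubernetes defines:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
      namespace: nonprod     # or "prod"; service DNS resolves per namespace
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # placeholder image
          env:
            - name: DB_HOST    # available to the app as ${DB_HOST}
              value: mydb-nonprod.example.com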
3) Yes, this is what the DNS and Service object provide (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#services)
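In practice that means clients can discover a service by DNS name instead of registering endpoints in Consul/ZooKeeper; the name below is the placeholder one from point 2:

    # from any pod in the cluster
    nslookup mysql.prod.cluster.internal   # resolves to the Service's stable IP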

Openshift .kubeconfig file and certificate authentication

I have been messing around with OpenShift and reading as much documentation as I can. Yet the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so the reverse seems fair.
The client-certificate-data and client-key-data are base64-encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
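A quick way to verify this yourself; the file paths below are placeholders for wherever your .kubeconfig and the admin certificate live:

    # Extract the embedded cert from .kubeconfig, decode it, and compare
    grep client-certificate-data .kubeconfig | awk '{print $2}' | base64 -d > decoded.crt
    diff decoded.crt /path/to/admin.crt   # no output means they match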