How to extract environment variables in Rancher automatically

First of all, sorry if this thread is not appropriate for Stack Overflow, but I think it is the best place of all.
We are using Rancher to manage a microservices solution. Most of the containers are Node.js + Express apps, but there are others, like Mongo or Identity Server.
We use many environment variables, like endpoints or environment constants, and when we upgrade some of the containers individually, we forget to include them (most of the time, the person who deploys an upgrade is not the person who made the new version).
So, we're looking for a way to manage them. We know that using a Dockerfile could be the best way, but if we need to upgrade just one container, that seems like too much work for a minor change.
TL;DR: How do you manage your environment variables in Rancher? How do you document them, or how do you extract them automatically?
Thanks!

Applications in Rancher are generally managed using Stacks/Services. A Dockerfile is used to build a container image; docker-compose/rancher-compose files are used to define the applications. The environment variables can be specified in the docker-compose file.
When you upgrade a service in Rancher, the environment variable information is carried forward, and it's also possible to edit the variables before the upgrade.
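For example, a docker-compose.yml for one of the Node services might declare its variables like this (the service name, image, and variable names below are made up for illustration):

    version: '2'
    services:
      orders-api:
        image: example-registry/orders-api:1.4.2
        environment:
          # Endpoint of a dependent service (value is illustrative)
          IDENTITY_SERVER_URL: http://identity-server:5000
          NODE_ENV: production

Because the variables live in the compose file rather than in someone's head, an individual upgrade through Rancher keeps them attached to the service definition.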
Rancher's "Catalog" feature might also be useful for you. Check out: https://rancher.com/docs/rancher/v1.6/en/catalog/

Related

Dockerized GitLab Container Backup

I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
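A rough sketch of that flow, assuming an Omnibus-style image layout (the container name and backup path below may differ for your image):

    # Snapshot the current GitLab state from the running container
    docker exec gitlab gitlab-rake gitlab:backup:create

    # Copy the backup tarball out so it can be checked into version control
    docker cp gitlab:/var/opt/gitlab/backups/. ./backups/

    # In the startup hook of the test image, restore the baked-in backup
    gitlab-rake gitlab:backup:restore BACKUP=<timestamp> force=yes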
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
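The export/import round trip looks roughly like this (container and image names are placeholders); note that docker export drops image metadata such as CMD and ENV, so you may need --change flags on import:

    # Flatten the configured container's filesystem into a tarball
    docker export gitlab-configured > gitlab-snapshot.tar

    # Turn the tarball back into an image that tests can run against
    docker import gitlab-snapshot.tar registry.example.com/gitlab-snapshot:v1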
I suppose my question is, what is the Docker way of approaching a problem like this?

Deploying multiple/single instances of a website

We have an ASP.NET website, and we want to be able to deploy (and remove) multiple instances of the site on the same IIS machine.
We also have a number of customers that need to install the product on their own systems.
I was hoping WiX would be able to handle this, but it appears you can only have one instance installed at a time.
What options are available to me? Right now we use FinalBuilder to set up a generic "install package": it uses a batch file that a user populates with their environment settings, plus tools like sed and awk to update config files, and more scripts to deploy to IIS.
It works, but it's very cumbersome. I was hoping to find more of a GUI/command-line interface to replace this process.
It sounds like MSDeploy will work for your use case. It can deploy multiple instances of a site to the same IIS server and can also delete instances.
The following post is specifically about service versioning but you could use the same technique to install several instances of a web app.
http://www.dotnetcatch.com/2016/03/03/simple-service-versioning-with-webdeploy/
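For instance, deploying one package as two separately named IIS applications, and later removing one, can look roughly like this (package, site, and application names are made up; check the MSDeploy provider docs for your exact setup):

    rem Deploy the same package as two independent applications
    msdeploy -verb:sync -source:package=MySite.zip -dest:iisApp="Default Web Site/CustomerA"
    msdeploy -verb:sync -source:package=MySite.zip -dest:iisApp="Default Web Site/CustomerB"

    rem Remove one instance's application definition again
    msdeploy -verb:delete -dest:apphostconfig="Default Web Site/CustomerA"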

Is there a way to tell kubernetes to update your containers?

I have a Kubernetes cluster, and I am wondering how (best practice) to update containers. I know the idea is to tear down the old containers and put up new ones, but is there a one-liner I can use? Do I have to remove the replication controller or pod(s) and then spin up new ones (pods or replication controllers)? I am using a self-hosted private registry that I know I have to build from the Dockerfile and push to anyway; that part I can automate with gulp (or any other build tool). Can I automate the Kubernetes update/tear-down-and-up as well?
Kubectl can automate the process of rolling updates for you. Check out the docs here:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md
A rolling update of an existing replication controller foo running Docker image bar:1.0 to image bar:2.0 can be as simple as running
kubectl rolling-update foo --image=bar:2.0.
Found where in the Kubernetes docs they mention updates: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md#rolling-updates. Wish it was more automated, but it works.
EDIT 1
For automating this, I've found https://www.npmjs.com/package/node-kubernetes-client, which, since I already automate the rest of my build and deployment with a Node process, will work really well.
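If you'd rather script it without a client library, the same loop is just a couple of commands (registry, image, and controller names below are placeholders):

    #!/bin/bash
    # Build the new image and push it to the private registry
    docker build -t registry.example.com/myapp:$BUILD_TAG .
    docker push registry.example.com/myapp:$BUILD_TAG

    # Roll the existing replication controller over to the new image
    kubectl rolling-update myapp-rc --image=registry.example.com/myapp:$BUILD_TAG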
The OpenShift Origin project (https://github.com/openshift/origin) runs an embedded Kubernetes cluster, but provides automated build and deployment workflows on top of your cluster if you do not want to roll your own solution.
I recommend looking at the example here:
https://github.com/openshift/origin/tree/master/examples/sample-app
It's possible some of the build and deployment hooks may move upstream into the Kubernetes project in the future, but this would serve as a good example of how deployment solutions can be built on top of Kubernetes.

What is the difference between vCenter's template and a virtual machine?

As the title says.
I wonder because I cannot find this information anywhere. Currently I am using a virtual machine (Linux) on my vCenter, which is cloned, and then a special shell script is run on the freshly cloned machine to set up the environment, IP addresses, etc.
Maybe I would be able to benefit from templates this way.
I think this will be helpful
https://www.robertparten.com/virtualization/vmware-difference-between-clone-and-template/
A few differences, in my opinion:
A virtual machine is the running instance, while a template is a compact copy of a VM (with baseline and factory settings) that can be stored anywhere.
One needs to deploy a template to get a running VM.
You can create a copy from both a VM and a template, but a VM has to be cloned, while a template is deployed.
Moving between different setups is easier with a template.
The rest are already mentioned in the link provided.
But first you should search on your own, and only ask if you still have doubts; that's how we all learn.
Happy Learning!
Looking at these two scenarios:
Create a template from your active VM, then deploy from the template.
Deploy from the active VM directly.
As far as I know, there will be no difference in the end result if you run these scenarios in the near future. You'll still have to run a script in order to get your IPs setup, etc.
So what's the difference?
If you mess stuff up with your active VM, change things around or whatever, you lose the ability to deploy from the (good) setup you had.
Once you make a template from your active VM, that configuration is saved as a file on the ESX (or the storage, not 100% sure) and can be re-deployed in the future.
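As a concrete sketch of that workflow with the govc CLI (VM and template names are placeholders; the same steps exist in the vSphere client and in PowerCLI):

    # Turn the configured VM into a template; it can no longer be powered on
    govc vm.markastemplate golden-linux

    # Deploy a fresh VM from the template, then run the usual setup script on it
    govc vm.clone -vm golden-linux -on=true web-01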

How to minimize JBoss AS 7 configuration to fit needs

I need to be able to configure JBoss AS 7.2 to only start up the services required by the project.
What is the best approach to customizing the JBoss AS 7.2 configuration and setting it to a user-defined configuration?
I'm intending to use:
JAAS, EJB, JSF, ...
If I understand your request correctly, what you want is to remove the unnecessary subsystems.
In standalone mode, you can maintain multiple XML configurations for your different uses and decide which standalone config to use at startup, which allows you to adapt to various needs quickly.
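For example, if you keep a slimmed-down copy next to the shipped configs, you can select it at startup (the file name is made up; --server-config is the stock AS 7 flag):

    # Start JBoss AS 7 with an alternate, minimal configuration
    $JBOSS_HOME/bin/standalone.sh --server-config=standalone-minimal.xml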
In domain mode, it's even easier, as you can define various profiles, each with specific subsystems present or not, and then assign the necessary profile to your server group.
Removing subsystems through XML modification or the CLI is simple; adding them back through the CLI sometimes requires figuring out some of the "default" expected entries to recreate the subsystems, but once figured out, that is easy as well.
The key element in your case is to make sure you do not clean up too many of the subsystems, as they sometimes have dependencies on one another.
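For example, dropping a subsystem you don't need (the webservices subsystem here is only an illustration) looks like this in jboss-cli; the matching extension can go once no profile references it:

    # Connect to the running server, remove the subsystem and its extension
    $JBOSS_HOME/bin/jboss-cli.sh --connect
    [standalone@localhost:9999 /] /subsystem=webservices:remove
    [standalone@localhost:9999 /] /extension=org.jboss.as.webservices:remove
    [standalone@localhost:9999 /] :reload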